Questions tagged with Backup & Recovery
Has anyone found an easier way to recover accidentally deleted objects in Storage Gateway? Our organization uses Storage Gateway as a network storage drive, and I'm anticipating that someone will unintentionally delete a folder or some files. What would be the easiest way to recover that data without using the CLI or other scripting?
My thought was to create a second, mirror S3 bucket that serves as a one-way backup of the production Storage Gateway S3 bucket. I'd add a lifecycle rule (or Intelligent-Tiering) to move objects older than, say, 90 days to cold storage, or to delete them outright. This bucket would act as a recycle bin in case objects or folders were deleted in the production bucket, giving me a clean, user-friendly window to recover those objects/folders.
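If it helps clarify the idea, the lifecycle side of that recycle-bin bucket would look roughly like this Python sketch (the bucket name is a placeholder; the one-way replication from the production bucket would be configured separately and requires versioning on both buckets):

```
import boto3

s3 = boto3.client("s3")

# Lifecycle rule for the hypothetical "recycle bin" mirror bucket:
# move objects to Glacier after 90 days, expire them after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-prod-recycle-bin",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "recycle-bin-archive-then-expire",
                "Status": "Enabled",
                "Filter": {},  # apply to every object in the bucket
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```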
You might ask why I don't just recover the objects or folders using the CLI or other methods. Well, I'm not a professional AWS admin; I only superficially support the data storage and management services for my company.
Also, turning on the versioning view does let me see objects with delete markers in the AWS console, but it's messy, and recovering deeply nested objects or folders seems difficult from the console.
I'm trying to restore data in EFS from recovery points managed by AWS Backup. It seems AWS Backup does not support destructive restores and will _always_ restore to a directory in the target EFS file system, even when creating a new one.
I would like to sync the data extracted from such a recovery point to another volume, but right now I can only do this manually, because I need to look up the directory name that the `start-restore-job` operation uses (e.g. `aws-backup-restore_2022-05-16T11-01-17-599Z`).
Looking through the documentation, I can't find either of the following:
- an option to set the name of the directory used
- the directory name returned by any call (either `start-restore-job` or `describe-restore-job`)
I have also checked whether the directory name maps to the `creationDate` or `completionDate` of the restore job, but it seems neither matches.
Is there any way for me to do one of these two things? With both missing, restoring a file system from a recovery point in an automated fashion is very hard.
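For context, the manual step I'd like to automate looks roughly like this Python sketch (the mount points are placeholders): find the newest `aws-backup-restore_*` directory on the mounted file system and sync it to the other volume.

```
import glob
import os
import subprocess

# Assumes the restored EFS file system is already mounted here (placeholder paths).
EFS_MOUNT = "/mnt/efs"
SYNC_TARGET = "/mnt/other-volume"

# AWS Backup places restored data in a directory named
# aws-backup-restore_<timestamp>; pick the most recently created one.
restore_dirs = glob.glob(os.path.join(EFS_MOUNT, "aws-backup-restore_*"))
latest = max(restore_dirs, key=os.path.getmtime)

# Copy the restored tree to the destination volume.
subprocess.run(["rsync", "-a", latest + "/", SYNC_TARGET + "/"], check=True)
```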
I had set up AWS Backup some time ago for RDS and, as far as I could tell, it was correct. I was able to restore from it in my tests (from both vaults: one for production with a WORM setup, and one non-production vault for resources used for development). The backups were also running correctly; I would see them being made each day for Snapshots, and Continuous backups also looked correct.
Just recently (mid last week) I stopped being able to see or access backups. When I view the Backup Vaults as the root user of the account, I no longer see any backups from a certain date forward (Creation Time doesn't go past May 12th). When I look at the specific protected resources, I can see more recent creation times, but those backups are all prefixed with `rds:` rather than `awsbackup:`. Given that AWS Backup is managing the backups (or should be), this doesn't make sense to me.
Additionally, when I actually go to attempt a restore from the console with ANY of these, I receive an error that says: "Failed to retrieve snapshot."
Again, I am doing this as the root user, and it did work previously. I'm very concerned about this and would appreciate any help or guidance.
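In case it helps with diagnosis, this is roughly what I've been checking (the instance identifier is a placeholder): listing the RDS snapshots by type, since the `rds:`-prefixed ones are the automated RDS snapshots, while AWS Backup-managed snapshots have the dedicated `awsbackup` type.

```
import boto3

rds = boto3.client("rds")

# "automated" covers the rds:-prefixed snapshots; AWS Backup-managed
# snapshots are returned under the "awsbackup" snapshot type.
for snapshot_type in ("automated", "awsbackup"):
    resp = rds.describe_db_snapshots(
        DBInstanceIdentifier="my-db-instance",  # placeholder identifier
        SnapshotType=snapshot_type,
    )
    for snap in resp["DBSnapshots"]:
        print(snapshot_type, snap["DBSnapshotIdentifier"], snap.get("SnapshotCreateTime"))
```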
I am making a snapshot of a volume mounted at /home/ec2-user/restore. When I use the web interface, everything works as expected.
When I use the AWS CLI to create a **new** volume from the snapshot and mount the **new** volume at /home/ec2-user/restore again, the new volume has all the files that are supposed to be there, but the files are empty.
Do you know why a snapshot would omit the data within the files?
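For reference, the CLI-driven path I'm describing is roughly equivalent to this Python sketch (the snapshot ID, instance ID, AZ, and device name are placeholders); after attaching, I mount the device at /home/ec2-user/restore on the instance.

```
import boto3

ec2 = boto3.client("ec2")

# Create a new volume from the snapshot in the instance's AZ (placeholders).
volume = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",
    AvailabilityZone="us-east-1a",
)

# Wait until the volume is available, then attach it to the instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)
```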
I am running the free-tier t2.micro instance. Since last night I have been unable to connect to the instance by any method; I've tried PuTTY, FileZilla, and the built-in console, but none of them work. My dashboard says the server is online and reachable, though. I can't ask support directly because of the aforementioned free tier. What should I do? At minimum I need to connect with FileZilla and back up my server data before wiping the instance, though I'd prefer not to do that.
Nothing was changed on the server before I was kicked off it.
I am trying to extend the retention of already-created backups in FSx for Windows. AWS Backup can have unlimited retention, but in the FSx console you can only set a maximum of 90 days. Can existing backups be copied, categorized as user-initiated backups, and then have their retention extended based on what's defined in a custom backup schedule [1]? Importing the automatic backups defined in the FSx console would be ideal, but I have not found a way to do this.
[1] - https://docs.aws.amazon.com/fsx/latest/WindowsGuide/additional-info.html
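If copying the existing recovery points is the right approach, I imagine it would look roughly like this Python sketch, using AWS Backup's copy-job API with a longer lifecycle on the copy (the ARNs and vault names are placeholders):

```
import boto3

backup = boto3.client("backup")

# Copy an existing recovery point into another vault with a longer retention.
# All ARNs and vault names below are placeholders.
backup.start_copy_job(
    RecoveryPointArn="arn:aws:fsx:us-east-1:123456789012:backup/backup-0123456789abcdef0",
    SourceBackupVaultName="Default",
    DestinationBackupVaultArn="arn:aws:backup:us-east-1:123456789012:backup-vault:long-term-vault",
    IamRoleArn="arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
    Lifecycle={"DeleteAfterDays": 3650},  # ~10 years instead of the 90-day cap
)
```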
RDS event / Log shows
Emergent Snapshot Request: Databases found to still be awaiting snapshot.
What are the backup and recovery options available for Babelfish for Aurora PostgreSQL?
Hi everyone! I'd like to better understand the billing composition regarding AWS Backup on DynamoDB resources, since I got an unexpected increase in my bill.
I'm aware of AWS Backup's own billing thanks to the [documentation](https://aws.amazon.com/backup/pricing/). However, when I open the Billing service I notice a steep charge under the DynamoDB service, on the line item `Amazon DynamoDB USE1-TimedBackupStorage-ByteHrs`. The description says I'm paying $0.10 per GB-month of storage used for on-demand backups, and that I've used 14,247.295 GB-month (this matches the bill I got). My doubt is: **where do all of those GB come from?** The latest snapshot size is only 175.5 GB.
I've configured my backup plan with the following parameters:
```
{
  "ruleName": "hourly-basis",
  "scheduleExpression": "cron(0 * ? * * *)",
  "startWindowMinutes": 60,
  "completionWindowMinutes": 180,
  "lifecycle": {
    "deleteAfterDays": 30
  }
}
```
I'm also copying snapshots into a second region, `us-west-2`.
As you can see, I'm running the schedule on an hourly basis because of compliance requirements. *Is this enough to justify the high bill?* I'm aware that backups with a low RPO are commonly expensive, but I just want to be sure the bill isn't higher than it should be because of a misconfigured backup plan.
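To sanity-check the number myself, I'm planning to sum the sizes of the recovery points currently held in the vault, along the lines of this Python sketch (the vault name is a placeholder). As far as I understand, each DynamoDB backup is a full copy of the table, and with an hourly schedule and 30-day retention up to ~720 (24 × 30) recovery points can exist at once, which may be where the GB-months add up.

```
import boto3

backup = boto3.client("backup")

# Sum the size of every recovery point currently held in the vault
# (vault name below is a placeholder).
paginator = backup.get_paginator("list_recovery_points_by_backup_vault")
total_bytes = 0
count = 0
for page in paginator.paginate(BackupVaultName="Default"):
    for rp in page["RecoveryPoints"]:
        total_bytes += rp.get("BackupSizeInBytes", 0)
        count += 1

print(f"{count} recovery points, {total_bytes / 1e9:.1f} GB of backup storage")
```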
Thanks in advance!
I configured AWS Backup in CDK to enable continuous backups for S3 buckets with the following configuration (a rough sketch follows the list):
- a backup rule with `enableContinuousBackup: true` and `deleteAfter` of 35 days
- a backup selection with the bucket ARN set directly in the `resources` array, and roles set up following the AWS docs: https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html
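As far as I can tell, the equivalent in the Python flavor of CDK looks roughly like this (construct IDs and the bucket ARN are placeholders, and the S3-specific role permissions from the docs above are omitted):

```
from aws_cdk import Duration, Stack
from aws_cdk import aws_backup as backup
from constructs import Construct

class S3BackupStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        plan = backup.BackupPlan(self, "S3BackupPlan")

        # Continuous (point-in-time) backups, deleted after 35 days.
        plan.add_rule(
            backup.BackupPlanRule(
                enable_continuous_backup=True,
                delete_after=Duration.days(35),
            )
        )

        # Select the bucket directly by ARN (placeholder ARN).
        plan.add_selection(
            "S3Selection",
            resources=[
                backup.BackupResource.from_arn("arn:aws:s3:::my-bucket-name"),
            ],
        )
```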
Later I deleted the stack in CDK and, as expected, all the resources were deleted except for the vault, which was orphaned.
The problem appears when I try to delete the recovery points inside the vault: the deletion comes back with status `Expired` and the message `Insufficient permission to delete recovery point`.
- I am logged in as a user with AdministratorAccess
- I changed the access policy of the vault to allow anyone to delete the vault / recovery point
- even when logged in as the root user of the account, I still get the same message.
---
- For reference, the AWS managed policy attached to my user is `AdministratorAccess`; it allows 325 of 325 services, which obviously includes AWS Backup.
- Here's the vault access policy that I set:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "backup:DeleteBackupVault",
        "backup:DeleteBackupVaultAccessPolicy",
        "backup:DeleteRecoveryPoint",
        "backup:StartCopyJob",
        "backup:StartRestoreJob",
        "backup:UpdateRecoveryPointLifecycle"
      ],
      "Resource": "*"
    }
  ]
}
```
Any ideas what I'm missing here?
**Update:**
- A full week after creating the backup recovery point, I am still unable to delete it.
- I tried deleting it from the AWS CLI, but no luck.
- I tried suspending versioning on the bucket in question and tried again, but no luck either.
We are considering moving our corporate data to AWS (where it will reside in a PostgreSQL DB); however, we would like to maintain a backup of the data on one of our own servers outside the AWS cloud (in our own data center). Is this possible?
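For context, the sort of thing we have in mind is a scheduled dump pulled from the AWS-hosted database down to a server in our data center, along the lines of this sketch (the endpoint, database, and user are placeholders, and authentication via `.pgpass` or similar is assumed):

```
import subprocess
from datetime import datetime, timezone

# Run from the on-premises server: pull a full dump of the AWS-hosted
# PostgreSQL database over the network (endpoint and names are placeholders).
timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
subprocess.run(
    [
        "pg_dump",
        "-h", "mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
        "-U", "backup_user",
        "-Fc",                                   # custom (compressed) format
        "-f", f"/backups/corp-db-{timestamp}.dump",
        "corp_db",
    ],
    check=True,
)
```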
I deleted my WorkMail organization 3 months ago, and now I would like to recover the emails. Is that possible?