Questions tagged with Backup & Recovery


I've just tested RDS MariaDB snapshot export to S3. The snapshot itself is good; a test restore re-creates the database correctly. However, exporting that snapshot to S3 appears to succeed while exporting no data. The job summary and the metadata stored in the bucket both say the test DB was skipped, and the metadata claims the DB is empty (it isn't). I'm working with a minimal test case and following the [AWS Console process as documented by AWS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ExportSnapshot.html), choosing to export all data (not a partial export). However, the resulting S3 export contains only the JSON metadata files; no data appears in S3. **Is this a known problem with the RDS MariaDB service?**

export_tables_info_MyDbName-snapshot-test-2-after-stopping_from_1_to_1.json:

```json
{
  "perTableStatus": [
    {
      "warningMessage": {
        "skippedDatabase": [
          {
            "reason": "DATABASE_IS_EMPTY"
          }
        ]
      },
      "target": "MyDbName"
    }
  ]
}
```
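As a minimal sketch for debugging cases like this, one could scan an export's `export_tables_info_*.json` files for skipped databases, assuming they follow the `perTableStatus` shape shown above:

```python
import json

def skipped_databases(metadata_text):
    """Return (target, reason) pairs for databases the export skipped.

    Assumes the export_tables_info_*.json shape shown in the question:
    a top-level "perTableStatus" list whose entries may carry a
    "warningMessage" with a "skippedDatabase" list.
    """
    doc = json.loads(metadata_text)
    skipped = []
    for entry in doc.get("perTableStatus", []):
        warning = entry.get("warningMessage", {})
        for item in warning.get("skippedDatabase", []):
            skipped.append((entry.get("target"), item.get("reason")))
    return skipped

sample = """
{
  "perTableStatus": [
    {
      "warningMessage": {
        "skippedDatabase": [{"reason": "DATABASE_IS_EMPTY"}]
      },
      "target": "MyDbName"
    }
  ]
}
"""
print(skipped_databases(sample))  # [('MyDbName', 'DATABASE_IS_EMPTY')]
```

Running this across every metadata file in the export prefix would show whether only this one database, or all of them, were reported as skipped.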
0 answers · 0 votes · 53 views
jeffsw · asked a year ago
The snapshots of our databases, created by Automated Backups, are set to retain the tags associated with the database, and on the primary snapshot they do. The replicated automated backups, however, do not retain the tags. Is there a way to make the replicated automated backups retain the original database's tags?
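One workaround, until replication preserves tags, is to re-apply the source tags to the replicated snapshots after the fact. A sketch of just the tag-diff step, assuming tag lists in the `{"Key": ..., "Value": ...}` shape that RDS `list_tags_for_resource` returns (fetching the lists and calling `add_tags_to_resource` via boto3 is left out):

```python
def missing_tags(source_tags, replica_tags):
    """Return tags present on the source but absent (or different) on the replica.

    Both arguments are lists of {"Key": ..., "Value": ...} dicts; the
    result could be passed to add_tags_to_resource on the replica.
    """
    replica = {t["Key"]: t["Value"] for t in replica_tags}
    return [t for t in source_tags if replica.get(t["Key"]) != t["Value"]]

src = [{"Key": "env", "Value": "prod"}, {"Key": "team", "Value": "data"}]
rep = [{"Key": "env", "Value": "prod"}]
print(missing_tags(src, rep))  # [{'Key': 'team', 'Value': 'data'}]
```

A scheduled job in the destination Region could run this diff for each replicated snapshot and apply only the missing tags, making it safe to re-run.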
2 answers · 0 votes · 73 views
asked a year ago
Hello, as I understand it, we can configure replication on any EFS file system to replicate its data to another EFS file system in another Region. Is this approach available for replication across accounts? If not, what is the best practice to replicate EFS across accounts? Thanks
1 answer · 0 votes · 999 views
Maan · asked a year ago
I am having issues setting up the required IAM access for cross-account backups. As I understand the requirements, there are four places to configure IAM access:

- Source account (management account) backup vault
- Source account (management account) resource assignment
- Target account backup vault
- Target account IAM access role

From the AWS Backup Developer Guide (p. 162) I understand that the IAM roles in the source and target accounts, the backup vaults, and the backup vault permissions need to match. I have the following configured:

- Source account backup vault access: "Allow access to Backup Vault from Organisation"
- Source account resource assignment: role with the default policy `AWSBackupOrganizationAdminAccess`
- Target account backup vault access: "Allow access to Backup Vault from Organisation"
- Target account IAM access role: role with the default policy `AWSBackupOrganizationAdminAccess`

I have followed the setup guide to enable cross-account backups for my AWS organization. When I run a backup job for an EC2 server in the target account I get the following error:

> Your backup job failed as AWS Backup does not have permission to describe resource <aws ec2 arn>

I assume that somewhere I do not have the IAM access configured correctly. Given that there are four places where I can configure IAM access, how do I track down where the issue is?
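For reference, cross-account copies require the destination vault's access policy to allow the copy action from the source organization. A minimal sketch of such a vault access policy, assuming `o-xxxxxxxxxx` stands in for the real organization ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCopyIntoVaultFromOrg",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "backup:CopyIntoBackupVault",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "o-xxxxxxxxxx"
        }
      }
    }
  ]
}
```

Note that an error about describing the EC2 resource usually points at the backup job's service role rather than a vault policy, so one thing worth checking is whether that role carries the `AWSBackupServiceRolePolicyForBackup` managed policy (or equivalent EC2 describe permissions).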
1 answer · 0 votes · 423 views
asked a year ago
In the last two weeks, we have been seeing a database instance restarting once or twice a day. The recent event log shows the following event, which notes the restart:

> MySQL restart initiated to address MySQL induced log backup issues. Note that as part of this resolution, a DB Snapshot will be performed after MySQL completes restarting.

We have not added any new logging to our database instance or changed any settings in the last two weeks, and we are not sure what could be causing log backup issues. How can we find the root cause of this issue and work toward a solution?
1 answer · 0 votes · 70 views
asked a year ago
Hi, I have an existing AWS Backup setup for Aurora, which I created via the console UI. I have now put together a CloudFormation template for it, which I'd like to import. I'm following the "import with existing resources" wizard but hitting an error I'm unable to understand. After selecting the new template, I am asked to enter on the UI:

- AWS::Backup::BackupVault - BackupVaultName
- AWS::Backup::BackupPlan - BackupPlanId
- AWS::Backup::BackupSelection - Id

On entering these values and then hitting Next a few times to get to the final screen, it loads for a few moments calculating the change set and then says "Backup Plan ID and Selection ID must be provided", although I do enter those values during the wizard. Any suggestions? Thanks.

Template below. This all works as expected if the backup plan does not currently exist.

```yaml
AWSTemplateFormatVersion: 2010-09-09
Description: >-
  Create RDS Backup
Parameters:
  OnlyCreateVault:
    Description: This is for the DR region. Only other required parameters are Environment and CostAllocation
    Type: String
    Default: false
    AllowedValues: [true, false]
  DestinationBackupVaultArn:
    Type: String
  ResourceSelectionIamRoleArn:
    Type: String
  ResourceSelectionArn:
    Description: Comma separated list of resource ARNs
    Type: String
  CostAllocation:
    Type: String
    AllowedValues:
      - 'Dev'
      - 'Demo'
      - 'Test'
      - 'Live'
  Environment:
    Type: String
    AllowedValues:
      - 'develop'
      - 'testing'
      - 'testenv'
      - 'demo'
      - 'live'
      - 'dr'
Conditions:
  CreateAllResources: !Equals [!Ref OnlyCreateVault, false]
Resources:
  Vault:
    Type: AWS::Backup::BackupVault
    DeletionPolicy: Delete
    Properties:
      BackupVaultName: !Sub backup-vault-${Environment}-rds-1
      BackupVaultTags:
        CostAllocation: !Ref CostAllocation
  Plan:
    Condition: CreateAllResources
    Type: AWS::Backup::BackupPlan
    DeletionPolicy: Delete
    Properties:
      BackupPlan:
        BackupPlanName: !Sub backup-plan-${Environment}-rds-1
        BackupPlanRule:
          - RuleName: !Sub backup-rule-${Environment}-daily-1
            CompletionWindowMinutes: 720
            CopyActions:
              - DestinationBackupVaultArn: !Ref DestinationBackupVaultArn
                Lifecycle:
                  DeleteAfterDays: 7
            EnableContinuousBackup: true
            Lifecycle:
              DeleteAfterDays: 35
            StartWindowMinutes: 120
            ScheduleExpression: cron(0 1 ? * * *)
            TargetBackupVault: !Sub backup-vault-${Environment}-rds-1
          - RuleName: !Sub backup-rule-${Environment}-weekly-1
            CompletionWindowMinutes: 720
            CopyActions:
              - DestinationBackupVaultArn: !Ref DestinationBackupVaultArn
                Lifecycle:
                  DeleteAfterDays: 35
            EnableContinuousBackup: false
            Lifecycle:
              DeleteAfterDays: 42
            StartWindowMinutes: 120
            ScheduleExpression: cron(0 1 ? * * *)
            TargetBackupVault: !Sub backup-vault-${Environment}-rds-1
          - RuleName: !Sub backup-rule-${Environment}-monthly-1
            CompletionWindowMinutes: 720
            CopyActions:
              - DestinationBackupVaultArn: !Ref DestinationBackupVaultArn
                Lifecycle:
                  MoveToColdStorageAfterDays: 365
            EnableContinuousBackup: false
            Lifecycle:
              DeleteAfterDays: 365
            StartWindowMinutes: 120
            ScheduleExpression: cron(0 1 ? * * *)
            TargetBackupVault: !Sub backup-vault-${Environment}-rds-1
        BackupPlanTags:
          CostAllocation:
            Ref: CostAllocation
  ResourceSelection:
    Condition: CreateAllResources
    Type: AWS::Backup::BackupSelection
    DeletionPolicy: Delete
    Properties:
      BackupPlanId: !Ref Plan
      BackupSelection:
        IamRoleArn: !Ref ResourceSelectionIamRoleArn
        Resources: !Split [",", !Ref ResourceSelectionArn]
        SelectionName: !Sub backup-resource-${Environment}-rds-1
```
0 answers · 0 votes · 53 views
asked a year ago
Hello, I currently have the Redshift automated snapshot retention period set to 1 day, on an ra3.xlplus cluster. As far as I know, Redshift automated snapshots are deleted automatically once the retention period is exceeded. However, checking the snapshots in the Redshift console, the snapshot still exists even after the retention period has expired. Can someone please assist with this issue?
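To make "after the retention period has expired" concrete, here is a small local check of whether a snapshot's window has fully elapsed, assuming a create timestamp like the one `DescribeClusterSnapshots` reports (the actual deletion is done by the service and can lag behind this):

```python
from datetime import datetime, timedelta, timezone

def past_retention(snapshot_create_time, retention_days, now=None):
    """True once the snapshot's retention window has fully elapsed."""
    now = now or datetime.now(timezone.utc)
    return now - snapshot_create_time > timedelta(days=retention_days)

created = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(past_retention(created, 1, now=datetime(2024, 1, 3, tzinfo=timezone.utc)))  # True
print(past_retention(created, 1, now=datetime(2024, 1, 1, 12, tzinfo=timezone.utc)))  # False
```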
2 answers · 0 votes · 109 views
asked a year ago
I am creating backup vaults in AWS Backup for our various resources (RDS, S3, and others). It's going well. However, I was curious: if I create two backup vaults for a single resource (e.g. a snapshots vault and a continuous vault), is that inefficient? Are there reasons I should or should not do it that way: additional costs, inefficiency in backup, something else? (Note: the above is just an example. I have more specific reasons why I might break it out that way, but I didn't want to spell all that out here.)
1 answer · 0 votes · 93 views
asked a year ago
Does the Organizations-level functionality for AWS Backup have to be managed from the Organizations management account? I want to be able to delegate AWS Backup management to a separate AWS account in my Organization.
1 answer · 0 votes · 88 views
asked a year ago
We have a couple of contact flows created in our Amazon Connect instance. While trying to take a backup of the flows, we receive the error "The number of blocks or resources in the flow exceeds the maximum allowed, or the file exceeds 1MB." I understand that this is a limitation within Amazon Connect. We want to set up dev and UAT environments for the same contact flows in a different Amazon Connect instance. Can you please advise how the call flows can be exported when the number of blocks or resources in the flow exceeds the maximum, or the file exceeds 1MB?

Also, what are the possibilities for implementing a CI/CD pipeline for our future developments? We expect the initial work to be done in the dev environment and later promoted to QA, UAT, and, after the testing phase, to the production instance. The environments are spread across different AWS accounts (or at least a couple of them):

- AWS account #1 - Dev and QA
- AWS account #2 - UAT
- AWS account #3 - Production

Are there CloudFormation templates or stacks that can be used?
1 answer · 0 votes · 326 views
asked a year ago
I have recently enabled AWS Backup for S3 on several buckets to test how it works, using tags to select the appropriate buckets. Everything works great. I then added the tag to our largest bucket (~1.1TB and ~8MM objects) so that it could get its first backup. The first night the backup ran but did not complete: the status of the backup is Expired, and the error message is `Backup job failed because there was a running job for the same resource`. The backup plan includes Daily, Weekly, and Monthly rules for data retention purposes, and I have not had this issue with any of the smaller buckets. This is the only backup plan on this S3 bucket, so I don't understand that error message. Is it possible that the first backup takes so long that it clashes with the next Daily backup? Is there any other way I can take that initial backup so that all subsequent backups are incremental? (Do a manual backup that expires in 5 weeks or something - would all automated backups after it be incremental?)
1 answer · 0 votes · 815 views
asked a year ago
Hi everyone, last year we moved one of our main reporting systems to Elasticsearch 7.9, located in the N. California Region. Only two AZs are available in this Region, and our domain has three dedicated master nodes. We are now looking at disaster recovery options. I read https://docs.aws.amazon.com/opensearch-service/latest/developerguide/disaster-recovery-resiliency.html, and it looks like with our current setup we have a 50/50 downtime possibility. So far I have come up with two options:

1. Cross-cluster replication (which will double our cost)
2. Continuous snapshot backup

My questions:

1. Is there any other possible option for disaster recovery that I haven't considered?
2. For snapshot backup, is there an easy way to schedule/manage the backups, so we can bring the domain back up in the shortest period of time and with the least data loss?
1 answer · 1 vote · 87 views
asked a year ago