
Questions tagged with AWS Backup



Cannot create S3 Backup using AWS Backup

I am trying to make an S3 backup using AWS Backup. The error message I'm getting is (I have deliberately changed the bucket name and account number):

```
Unable to perform s3:PutBucketNotification on my-bucket-name-123
The backup job failed to create a recovery point for your resource arn:aws:s3:::my-bucket-name-123 due to missing permissions on role arn:aws:iam::123456789000:role/service-role/AWSBackupDefaultServiceRole.
```

I have attached the inline policy described in the [documentation](https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html) to AWSBackupDefaultServiceRole (note: the role also contains the AWS managed policy AWSBackupServiceRolePolicyForBackup as well as the following):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3BucketBackupPermissions",
      "Action": [
        "s3:GetInventoryConfiguration",
        "s3:PutInventoryConfiguration",
        "s3:ListBucketVersions",
        "s3:ListBucket",
        "s3:GetBucketVersioning",
        "s3:GetBucketNotification",
        "s3:PutBucketNotification",
        "s3:GetBucketLocation",
        "s3:GetBucketTagging"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::*"
      ]
    },
    {
      "Sid": "S3ObjectBackupPermissions",
      "Action": [
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:GetObjectVersionTagging",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectTagging",
        "s3:GetObjectVersion"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::*/*"
      ]
    },
    {
      "Sid": "S3GlobalPermissions",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Effect": "Allow",
      "Resource": [
        "*"
      ]
    },
    {
      "Sid": "KMSBackupPermissions",
      "Action": [
        "kms:Decrypt",
        "kms:DescribeKey"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "kms:ViaService": "s3.*.amazonaws.com"
        }
      }
    },
    {
      "Sid": "EventsPermissions",
      "Action": [
        "events:DescribeRule",
        "events:EnableRule",
        "events:PutRule",
        "events:DeleteRule",
        "events:PutTargets",
        "events:RemoveTargets",
        "events:ListTargetsByRule",
        "events:DisableRule"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:events:*:*:rule/AwsBackupManagedRule*"
    },
    {
      "Sid": "EventsMetricsGlobalPermissions",
      "Action": [
        "cloudwatch:GetMetricData",
        "events:ListRules"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
```

This, to me, looks correct, and it should not be giving that error. Is there a bug? Or is there a step which is not described in the documentation? I would really appreciate some help. Many thanks.
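One way to check whether the role's effective permissions (including any permissions boundary) actually allow `s3:PutBucketNotification` is the IAM policy simulator. A minimal sketch, reusing the placeholder ARNs from the question:

```
# Sketch: ask the IAM policy simulator whether the backup role is allowed
# s3:PutBucketNotification on the bucket. The ARNs below are the question's
# placeholders, not real resources.
import boto3

iam = boto3.client("iam")

result = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789000:role/service-role/AWSBackupDefaultServiceRole",
    ActionNames=["s3:PutBucketNotification"],
    ResourceArns=["arn:aws:s3:::my-bucket-name-123"],
)

for r in result["EvaluationResults"]:
    # EvalDecision is "allowed", "explicitDeny", or "implicitDeny"
    print(r["EvalActionName"], "->", r["EvalDecision"])
```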
0
answers
0
votes
3
views
AWS-User-6439945
asked 8 days ago

AWS Backup DynamoDB billing

Hi everyone! I'd like to better understand the billing composition regarding AWS Backup on DynamoDB resources, since I got an unexpected increase in my bill. I'm aware of AWS Backup billing itself thanks to the [documentation](https://aws.amazon.com/backup/pricing/). However, when I access the Billing service I can see a steep increase in charges for the DynamoDB service. On the line `Amazon DynamoDB USE1-TimedBackupStorage-ByteHrs`, the description shows that I'll be paying $0.10 per GB-month of storage used for on-demand backups, and that I've used 14,247.295 GB-month (this matches the bill I got). Where my doubt comes from is: **where do all those GB come from?** The latest snapshot size is just 175.5 GB.

I've configured my backup plan with the following parameters:

```
{
  "ruleName": "hourly-basis",
  "scheduleExpression": "cron(0 * ? * * *)",
  "startWindowMinutes": 60,
  "completionWindowMinutes": 180,
  "lifecycle": {
    "deleteAfterDays": 30
  }
}
```

I'm also copying snapshots into a second region, `us-west-2`.

As you can see, I'm using an hourly schedule expression because of compliance requirements. *Is this enough to justify the high bill?* I'm aware that backups with a low RPO are commonly expensive, but I just want to be sure that this bill is not higher than it should be because of a wrong backup configuration. Thanks in advance!
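As a rough sanity check, here is a back-of-the-envelope estimator of how GB-month accumulates with an hourly schedule. It assumes every on-demand backup is billed at full table size (DynamoDB on-demand backups are full copies, not incremental), so treat it as an upper bound using the numbers from the question:

```
# Upper-bound estimate of steady-state backup storage for an hourly plan,
# assuming each backup is billed at the full table size. Values come from
# the question above.
snapshot_gb = 175.5      # latest snapshot size
backups_per_day = 24     # hourly schedule
retention_days = 30      # deleteAfterDays

# At steady state this many recovery points exist at once...
points = backups_per_day * retention_days             # 720
# ...so the stored volume (and hence GB-month billed per month) is roughly:
stored_gb = points * snapshot_gb                      # ~126,360 GB
print(f"{points} recovery points, ~{stored_gb:,.0f} GB-month per month")
print(f"At $0.10/GB-month: ~${stored_gb * 0.10:,.0f}/month")
```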
0
answers
0
votes
1
views
AWS-User-2991569
asked 14 days ago

Expired S3 Backup Recovery Point

I configured AWS Backup in CDK to enable continuous backups for S3 buckets with this configuration:

- backup rule: with `enableContinuousBackup: true` and `deleteAfter` 35 days
- backup selection: with the `resources` array having the ARN of the bucket set directly, and roles set up following the AWS docs: https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html

Later I deleted the stack in CDK and, as expected, all the resources were deleted except for the vault, which was orphaned. The problem happens when trying to delete the recovery points inside the vault: I get back the status `Expired` with the message `Insufficient permission to delete recovery point`.

- I am logged in as a user with AdministratorAccess.
- I changed the access policy of the vault to allow anyone to delete the vault / recovery point.
- Even when logged in as the root of the account, I still get the same message.

For reference, this is the AWS managed policy attached to my user: `AdministratorAccess`. It allows all (325 of 325) services, obviously including AWS Backup.

Here's the vault access policy that I set:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "backup:DeleteBackupVault",
        "backup:DeleteBackupVaultAccessPolicy",
        "backup:DeleteRecoveryPoint",
        "backup:StartCopyJob",
        "backup:StartRestoreJob",
        "backup:UpdateRecoveryPointLifecycle"
      ],
      "Resource": "*"
    }
  ]
}
```

Any ideas what I'm missing here?

**Update:**

- A full week after creating the backup recovery point, I am still unable to delete it.
- I tried deleting it from the AWS CLI, but no luck.
- I tried suspending versioning for the bucket in question and tried again, but no luck either.
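For reference, a small diagnostic sketch of pulling the recovery point's metadata before retrying the delete; the status message and lifecycle fields can show what is blocking deletion. The vault name and recovery point ARN below are placeholders:

```
# Sketch: inspect a recovery point's status and lifecycle, then retry the
# delete. BackupVaultName and RecoveryPointArn are placeholders.
import boto3

backup = boto3.client("backup")

rp = backup.describe_recovery_point(
    BackupVaultName="my-orphaned-vault",
    RecoveryPointArn="arn:aws:backup:...:recovery-point:...",
)
print(rp["Status"], rp.get("StatusMessage", ""))
print(rp.get("Lifecycle"), rp.get("CalculatedLifecycle"))

# The delete call itself, for reference:
backup.delete_recovery_point(
    BackupVaultName="my-orphaned-vault",
    RecoveryPointArn="arn:aws:backup:...:recovery-point:...",
)
```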
0
answers
0
votes
1
views
Anis
asked 18 days ago

Design questions on ASG, backup/restore, EBS and EFS

Hi experts,

We are designing the deployment of a BI application in AWS. We have a default policy to repave each EC2 instance every 14 days, which means rebuilding the whole cluster's instances with their services and bringing them back to the last known good state. We want a solution with no or minimal downtime. The application has different services provisioned on different EC2 instances: the first server is the main node, and the rest are additional nodes with different services running on them. We install all additional nodes the same way but configure the services later in the code deploy.

1. Can we use an ASG? If yes, how can we distribute the topology? Meaning: out of 5 instances, if one server repaves, that server should come up with the same services as the previous one. Is there a way to label instances in an ASG saying that this server should be configured as a certain service? (See the sketch below.)
2. Each server should have its own EBS volume and store some data on it. What is the fastest way to copy or attach the EBS volume to the newly repaved server without downtime?
3. For shared data we want to use EFS.
4. For metadata from the embedded Postgres, we need to take a backup periodically and restore it after the repave (create a new instance with the install and the same service). How can we achieve this without downtime?

We do not want to use a customized AMI, as we have a big process for AMI creation and we would often need to change it whenever we want to add an install or config step.

Sorry if this is a lot to answer. Any guidance is helpful.
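One common pattern for question 1, sketched under assumptions: give each ASG a tag that names the service it hosts (with `PropagateAtLaunch` enabled so instances inherit it), and have each instance read its own tag at boot to decide which service to configure. The tag key `service` is illustrative:

```
# Sketch: at boot, an instance discovers which service it should configure
# by reading a "service" tag propagated from its Auto Scaling group.
# The tag key is an illustrative assumption.
import urllib.request

import boto3

# Instance metadata service (IMDSv1 shown for brevity) gives us our own ID.
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

ec2 = boto3.client("ec2")
tags = ec2.describe_tags(
    Filters=[{"Name": "resource-id", "Values": [instance_id]}]
)["Tags"]

service = next((t["Value"] for t in tags if t["Key"] == "service"), None)
print(f"This node should configure service: {service}")
# A user-data or CodeDeploy hook could branch on `service` to set up the right role.
```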
1
answers
0
votes
5
views
AWS-User-4880625
asked a month ago

AWS Backup for AWS Organizations IAM Configuration Issue

I am having issues setting up the required IAM access for cross-account backups. As I understand the requirements, there are four places to configure IAM access:

- Source Account (management account) Backup Vault
- Source Account (management account) Resource Assignment
- Target Account Backup Vault
- Target Account IAM access role

From the AWS Backup Developer Guide (p. 162) I understand that the IAM roles in the Source and Target accounts, the Backup Vaults, and the Backup Vault permissions need to match. I have the following configured:

- Source Account Backup Vault Access: "Allow Access to Backup Vault from Organisation"
- Source Account Resource Assignment: Role with the default policy called "AWSBackupOrganizationAdminAccess"
- Target Account Backup Vault Access: "Allow Access to Backup Vault from Organisation"
- Target Account IAM access role: Role with the default policy called "AWSBackupOrganizationAdminAccess"

I have followed the setup guide to enable cross-account backups for my AWS Organization. When I run a backup job for an EC2 server in the target account I get the following error:

```
Your backup job failed as AWS Backup does not have permission to describe resource <aws ec2 arn>
```

I assume that somewhere I do not have the IAM access configured correctly. As there are four places where I can configure IAM access, how do I track down where the issue is?
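One way to narrow this down, as a sketch: `describe_backup_job` reports which IAM role the failed job actually assumed, which tells you whether the problem is in the resource-assignment role or elsewhere. The job ID below is hypothetical (copy it from the failed job in the console):

```
# Diagnostic sketch: show which IAM role a failed backup job used.
# The BackupJobId is a hypothetical placeholder.
import boto3

backup = boto3.client("backup")  # run in the account that owns the job

job = backup.describe_backup_job(BackupJobId="0123abcd-...")
print("Role used:", job["IamRoleArn"])
print("Status:   ", job["State"])
print("Message:  ", job.get("StatusMessage", ""))
```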
1
answers
0
votes
4
views
Simon Cox
asked a month ago

CloudFormation - Importing existing AWS Backup

Hi, I have an existing AWS Backup setup for Aurora, which I created via the console UI. I have now put together a CloudFormation template for it, which I'd like to import. I'm following the "import with existing resources" wizard, but hitting an error I'm unable to understand.

After selecting the new template I am asked to enter in the UI:

- AWS::Backup::BackupVault - BackupVaultName
- AWS::Backup::BackupPlan - BackupPlanId
- AWS::Backup::BackupSelection - Id

On entering these values and then hitting next a few times to get to the final screen, it loads for a few moments calculating the change set and then says "Backup Plan ID and Selection ID must be provided", although I do enter those values during the wizard. Any suggestions? Thanks.

Template below. This all works as expected if the Backup Plan does not currently exist:

```
AWSTemplateFormatVersion: 2010-09-09
Description: >-
  Create RDS Backup
Parameters:
  OnlyCreateVault:
    Description: This is for the DR region. Only other required parameters are Environment and CostAllocation
    Type: String
    Default: false
    AllowedValues: [true, false]
  DestinationBackupVaultArn:
    Type: String
  ResourceSelectionIamRoleArn:
    Type: String
  ResourceSelectionArn:
    Description: Comma separated list of resource ARNs
    Type: String
  CostAllocation:
    Type: String
    AllowedValues:
      - 'Dev'
      - 'Demo'
      - 'Test'
      - 'Live'
  Environment:
    Type: String
    AllowedValues:
      - 'develop'
      - 'testing'
      - 'testenv'
      - 'demo'
      - 'live'
      - 'dr'
Conditions:
  CreateAllResources: !Equals [!Ref OnlyCreateVault, false]
Resources:
  Vault:
    Type: AWS::Backup::BackupVault
    DeletionPolicy: Delete
    Properties:
      BackupVaultName: !Sub backup-vault-${Environment}-rds-1
      BackupVaultTags:
        CostAllocation: !Ref CostAllocation
  Plan:
    Condition: CreateAllResources
    Type: AWS::Backup::BackupPlan
    DeletionPolicy: Delete
    Properties:
      BackupPlan:
        BackupPlanName: !Sub backup-plan-${Environment}-rds-1
        BackupPlanRule:
          - RuleName: !Sub backup-rule-${Environment}-daily-1
            CompletionWindowMinutes: 720
            CopyActions:
              - DestinationBackupVaultArn: !Ref DestinationBackupVaultArn
                Lifecycle:
                  DeleteAfterDays: 7
            EnableContinuousBackup: true
            Lifecycle:
              DeleteAfterDays: 35
            StartWindowMinutes: 120
            ScheduleExpression: cron(0 1 ? * * *)
            TargetBackupVault: !Sub backup-vault-${Environment}-rds-1
          - RuleName: !Sub backup-rule-${Environment}-weekly-1
            CompletionWindowMinutes: 720
            CopyActions:
              - DestinationBackupVaultArn: !Ref DestinationBackupVaultArn
                Lifecycle:
                  DeleteAfterDays: 35
            EnableContinuousBackup: false
            Lifecycle:
              DeleteAfterDays: 42
            StartWindowMinutes: 120
            ScheduleExpression: cron(0 1 ? * * *)
            TargetBackupVault: !Sub backup-vault-${Environment}-rds-1
          - RuleName: !Sub backup-rule-${Environment}-monthly-1
            CompletionWindowMinutes: 720
            CopyActions:
              - DestinationBackupVaultArn: !Ref DestinationBackupVaultArn
                Lifecycle:
                  MoveToColdStorageAfterDays: 365
            EnableContinuousBackup: false
            Lifecycle:
              DeleteAfterDays: 365
            StartWindowMinutes: 120
            ScheduleExpression: cron(0 1 ? * * *)
            TargetBackupVault: !Sub backup-vault-${Environment}-rds-1
      BackupPlanTags:
        CostAllocation:
          Ref: CostAllocation
  ResourceSelection:
    Condition: CreateAllResources
    Type: AWS::Backup::BackupSelection
    DeletionPolicy: Delete
    Properties:
      BackupPlanId: !Ref Plan
      BackupSelection:
        IamRoleArn: !Ref ResourceSelectionIamRoleArn
        Resources: !Split [",", !Ref ResourceSelectionArn]
        SelectionName: !Sub backup-resource-${Environment}-rds-1
```
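For anyone unsure which identifiers the wizard expects, a quick sketch to list the existing plan and selection IDs so they can be pasted in exactly:

```
# Sketch: print BackupPlanId and SelectionId for every existing plan,
# i.e. the two identifiers the import wizard asks for.
import boto3

backup = boto3.client("backup")

for plan in backup.list_backup_plans()["BackupPlansList"]:
    print(plan["BackupPlanId"], plan["BackupPlanName"])
    selections = backup.list_backup_selections(
        BackupPlanId=plan["BackupPlanId"]
    )["BackupSelectionsList"]
    for sel in selections:
        print("  selection:", sel["SelectionId"], sel["SelectionName"])
```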
0
answers
0
votes
3
views
AWS-User-3645846
asked a month ago

How to build a mechanism to govern multiple AWS data locking features?

**Background**

There is an identified need to govern the multiple data locking features that AWS provides, in the context of a multi-account environment with independent teams. Without governance, data locking might be enabled in various AWS accounts (in various regions), causing a potential compliance nightmare and related rollback challenges if data is accidentally locked for multiple years. It seems the only way to exit compliance-mode data locking is to fully close the related AWS account (data then seems to be deleted after 90 days, even when locked).

Optimally, the use of AWS locking features would be allowed only by exception (after human review of each use case). Governance mode could be allowed by default for all accounts/resources, but it should be possible to prevent the use of compliance mode (in any AWS service that provides data locking) with SCPs in the AWS Organization.

At least these three operations have been identified as related to data locking:

* backup:PutBackupVaultLockConfiguration
* glacier:CompleteVaultLock
* s3:PutBucketObjectLockConfiguration

**Questions**

1. To deny all AWS data locking features, what IAM actions need to be denied with an SCP, in addition to the ones above? (See the sketch after this list.)
2. Is the only way to exit a Backup Vault Lock to close the related AWS account (with a 90-day grace period)?
3. How can one confirm the deletion of the data related to the question above? The assumption is that data remains until the grace period has passed (90 days). Does AWS emit some logs (when an account is being closed) that prove the data has actually been wiped?
4. How can one list which data locks are currently in use? Is CloudTrail the only option?
5. Are there any other best practices to share for centrally governing the various AWS data locking features?
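A minimal sketch of the SCP part, assuming the three actions listed above are the starting point (the action list is not claimed to be complete, and the policy name and target root ID are hypothetical):

```
# Sketch: create an SCP denying the three identified locking actions and
# attach it at the organization root. Name and TargetId are hypothetical.
import json

import boto3

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyDataLocking",
            "Effect": "Deny",
            "Action": [
                "backup:PutBackupVaultLockConfiguration",
                "glacier:CompleteVaultLock",
                "s3:PutBucketObjectLockConfiguration",
            ],
            "Resource": "*",
        }
    ],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Content=json.dumps(scp),
    Description="Deny data locking features pending review",
    Name="deny-data-locking",        # hypothetical name
    Type="SERVICE_CONTROL_POLICY",
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",      # hypothetical root ID
)
```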
0
answers
0
votes
4
views
AWS-User-3014227
asked a month ago

AWS Backup VSS snapshot fails

I am backing up about 45 Windows Server EC2 instances with AWS Backup. One of the AWS Backup jobs, covering about 35 of those instances, does a VSS snapshot as part of the backup, and I get a lot of VSS failure messages. Some of them are VSS timeouts, which I understand is a Windows issue that occurs because of an unconfigurable 10-second maximum time for the snapshot to complete. Some are related to the AWS VSS provider.

In AWS Backup the error is "Windows VSS Backup Job Error encountered, trying for regular backup". The job then completes, but without a VSS snapshot. In SSM, the Run Command error for this task is:

```
Encountered unexpected error. Please see error details below
Message : The process cannot access the file 'C:\Program Files\Amazon\AwsVssComponents\vsserr.log' because it is being used by another process.
```

I tried to rename this file (just as a test, to see if it was in use) and it says it is in use by ec2-vss-agent.exe. So I stopped the EC2 VSS Windows service, but that did not stop the ec2-vss-agent.exe process and the error remained. I did an "end task" on the ec2-vss-agent.exe process and then manually ran the VSS Run Command from SSM. It restarted the process, and it ran for a while before timing out, which is the other (unrelated?) issue we see too.

I cannot find anything online about this issue or error, and I'm at a loss as to where to look from here. I need VSS snapshots of these servers. If anyone has any ideas about how to troubleshoot this or what else to look for, please let me know!
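For isolating the VSS provider from the AWS Backup job, a hedged sketch of invoking the public VSS snapshot SSM document directly and reading back the per-instance error output (the instance ID is a placeholder):

```
# Sketch: run the public AWSEC2-CreateVssSnapshot SSM document directly,
# then fetch the per-instance error output. Instance ID is a placeholder.
import boto3

ssm = boto3.client("ssm")

resp = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWSEC2-CreateVssSnapshot",
)
command_id = resp["Command"]["CommandId"]

# Once the command has finished, inspect the VSS error details.
out = ssm.get_command_invocation(
    CommandId=command_id,
    InstanceId="i-0123456789abcdef0",
)
print(out["Status"], out.get("StandardErrorContent", ""))
```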
1
answers
0
votes
5
views
jhallock
asked 5 months ago

Redshift snapshots - incremental/full and retention

Hi,

For compliance reasons, my client requires two types of backup: a daily backup with 35-day retention (although I assume they read the docs and decided on 35 days as it's the limit...) but also a monthly full backup which is kept for 2 years (24 months). I'm a little confused by the documentation.

Firstly, the snapshots are incremental, but each one can be used to do a full restore to a new cluster - how is this possible? Is it incremental since the last snapshot, and if so, what happens if the previous snapshots are deleted?

Secondly, I can see there's a limit of 20 snapshots (which you can request to change). Before I even consider manual monthly snapshots, if my automated snapshot is daily and retained for 35 days, I am going to have >35 automated snapshots at any point in time - will this be an issue?

Thirdly, if my DWH size is 24 TB and I am somehow (?) able to create a full database backup via snapshots, I'm going to be paying for the storage of 576 TB (24 months x 24 TB) in S3, which will be at a very high cost. Ideally, we'd be able to store this in Glacier, but I understand we don't have access to the S3 bucket containing the snapshots.

So my questions are:

- How is a full cluster restore performed from an incremental snapshot?
- Is this still possible if there's only one incremental snapshot (all the others are deleted)?
- Will I exceed the 20 snapshot limit by having a 35 day retention period?
- Is it possible to create a "full" backup in a snapshot?
- Is it possible to access snapshot locations on S3 so that we can move them to Glacier and still make them available to Redshift for restore?

Thanks in advance,
J
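Since the 576 TB figure assumes every monthly snapshot is a full copy, a back-of-the-envelope sketch may clarify why incremental snapshots change the arithmetic; the monthly churn figure below is purely an assumption:

```
# Sketch: Redshift snapshots store only blocks changed since the previous
# snapshot, so 24 retained monthlies cost far less than 24 full copies
# unless the data churns completely every month. Churn is an assumption.
full_size_tb = 24.0        # current warehouse size
months_retained = 24       # monthly snapshots kept 2 years
monthly_churn_tb = 2.0     # assumed TB of blocks changed per month

# Roughly: one full baseline plus one increment per retained month.
naive_full_copies_tb = months_retained * full_size_tb                # 576 TB
incremental_estimate_tb = full_size_tb + months_retained * monthly_churn_tb

print(f"Naive (full copies):  {naive_full_copies_tb:.0f} TB")
print(f"Incremental estimate: {incremental_estimate_tb:.0f} TB")    # 72 TB
```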
1
answers
0
votes
0
views
redshiftjls
asked 5 years ago