Questions tagged with Backup & Recovery
We're using centrally managed backup policies in our AWS Organization to back up our data via AWS Backup. This works flawlessly for all resources except for S3 buckets.
When I create the same backup plan in one of the member accounts and specify that the resource type is S3, it works.
I've checked our CloudTrail log, and somehow AWS Backup does not include S3 when searching for resources with the specified tag.
Here is the `GetResources` event when the job is run by the backup plan of the organization:
```json
{
  "eventVersion": "1.08",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "XXXXXXXYYYYYYYZZZZZZ:AWSBackup-AWSServiceRoleForBackup",
    "arn": "arn:aws:sts::123456789012:assumed-role/AWSServiceRoleForBackup/AWSBackup-AWSServiceRoleForBackup",
    "accountId": "123456789012",
    "accessKeyId": "ASIA4ROB5DISLEP4KV7D",
    "sessionContext": {
      "sessionIssuer": {
        "type": "Role",
        "principalId": "XXXXXXXYYYYYYYZZZZZZ",
        "arn": "arn:aws:iam::123456789012:role/aws-service-role/backup.amazonaws.com/AWSServiceRoleForBackup",
        "accountId": "123456789012",
        "userName": "AWSServiceRoleForBackup"
      },
      "webIdFederationData": {},
      "attributes": {
        "creationDate": "2022-08-17T10:41:44Z",
        "mfaAuthenticated": "false"
      }
    },
    "invokedBy": "backup.amazonaws.com"
  },
  "eventTime": "2022-08-17T10:41:44Z",
  "eventSource": "tagging.amazonaws.com",
  "eventName": "GetResources",
  "awsRegion": "eu-central-1",
  "sourceIPAddress": "backup.amazonaws.com",
  "userAgent": "backup.amazonaws.com",
  "requestParameters": {
    "paginationToken": "",
    "tagFilters": [
      {
        "key": "BackupPlan",
        "values": [
          "OrganizationDailyBackupPlan"
        ]
      }
    ],
    "resourcesPerPage": 100,
    "resourceTypeFilters": [
      "dynamodb:table",
      "ec2:volume",
      "rds:db",
      "storagegateway:gateway",
      "elasticfilesystem:file-system",
      "rds:cluster",
      "ec2:instance",
      "fsx:file-system",
      "fsx:volume"
    ]
  },
  "responseElements": null,
  "requestID": "e37c2f72-f088-42ab-b1c7-0bc4d8e07dc1",
  "eventID": "72f91800-6225-49e6-8a34-5ac56581f936",
  "readOnly": true,
  "eventType": "AwsApiCall",
  "managementEvent": true,
  "recipientAccountId": "123456789012",
  "eventCategory": "Management"
}
```
And here is the `GetResources` event when the job is run by the backup plan that was created inside the member account:
```json
{
  "eventVersion": "1.08",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "XXXXXXXYYYYYYYZZZZZZ:AWSBackup-AWSServiceRoleForBackup",
    "arn": "arn:aws:sts::123456789012:assumed-role/AWSServiceRoleForBackup/AWSBackup-AWSServiceRoleForBackup",
    "accountId": "123456789012",
    "accessKeyId": "ASIA4ROB5DISPULAFFWS",
    "sessionContext": {
      "sessionIssuer": {
        "type": "Role",
        "principalId": "XXXXXXXYYYYYYYZZZZZZ",
        "arn": "arn:aws:iam::123456789012:role/aws-service-role/backup.amazonaws.com/AWSServiceRoleForBackup",
        "accountId": "123456789012",
        "userName": "AWSServiceRoleForBackup"
      },
      "webIdFederationData": {},
      "attributes": {
        "creationDate": "2022-08-17T10:27:06Z",
        "mfaAuthenticated": "false"
      }
    },
    "invokedBy": "backup.amazonaws.com"
  },
  "eventTime": "2022-08-17T10:27:06Z",
  "eventSource": "tagging.amazonaws.com",
  "eventName": "GetResources",
  "awsRegion": "eu-central-1",
  "sourceIPAddress": "backup.amazonaws.com",
  "userAgent": "backup.amazonaws.com",
  "requestParameters": {
    "paginationToken": "",
    "tagFilters": [
      {
        "key": "BackupPlan",
        "values": [
          "OrganizationDailyBackupPlan"
        ]
      }
    ],
    "resourcesPerPage": 100,
    "resourceTypeFilters": [
      "s3"
    ]
  },
  "responseElements": null,
  "requestID": "78798635-8a5a-4012-acbb-2bcda6e910c8",
  "eventID": "90bc2e81-2423-44e6-b041-f561c98dd086",
  "readOnly": true,
  "eventType": "AwsApiCall",
  "managementEvent": true,
  "recipientAccountId": "123456789012",
  "eventCategory": "Management"
}
```
So the only difference is `resourceTypeFilters`.
**So, why does the backup plan generated by the organization's backup policy exclude S3?**
**EDIT:** The backup selection of the backup plan that was generated from the organization's backup policy does not specify any resources at all:
```json
{
  "BackupSelection": {
    "SelectionName": "tf-organization-daily-backup-selection",
    "IamRoleArn": "arn:aws:iam::123456789012:role/tf-backup-role",
    "Resources": [],
    "ListOfTags": [
      {
        "ConditionType": "STRINGEQUALS",
        "ConditionKey": "BackupPlan",
        "ConditionValue": "OrganizationDailyBackupPlan"
      }
    ],
    "NotResources": [],
    "Conditions": {
      "StringEquals": [],
      "StringNotEquals": [],
      "StringLike": [],
      "StringNotLike": []
    }
  },
  "SelectionId": "ee883d39-7528-313b-8b72-54de063d5cf0",
  "BackupPlanId": "orgs/d67a7e29-20b5-3e2b-98a7-24a42ca1a2aa",
  "CreationDate": "2022-08-17T14:56:07.810000+02:00"
}
```
The selection for the test plan, in contrast, does specify that all S3 ARNs are allowed:
```json
{
  "BackupSelection": {
    "SelectionName": "test",
    "IamRoleArn": "arn:aws:iam::123456789012:role/tf-backup-role",
    "Resources": [
      "arn:aws:s3:::*"
    ],
    "ListOfTags": [],
    "NotResources": [],
    "Conditions": {
      "StringEquals": [
        {
          "ConditionKey": "aws:ResourceTag/BackupPlan",
          "ConditionValue": "OrganizationDailyBackupPlan"
        }
      ],
      "StringNotEquals": [],
      "StringLike": [],
      "StringNotLike": []
    }
  },
  "SelectionId": "ffa87c07-e463-42a1-9086-f45109fec02f",
  "BackupPlanId": "2e3367c9-9d9a-446e-9feb-3a4c1ba0b7d3",
  "CreationDate": "2022-08-17T12:18:01.314000+02:00",
  "CreatorRequestId": "26592555-4a3c-4fc2-a73f-25b3a4473519"
}
```
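For reference, both selections above were retrieved with `GetBackupSelection`. A minimal boto3 sketch using the plan and selection IDs from the output above:
```python
import boto3

backup = boto3.client("backup")

# Selection created from the Organizations backup policy (note the "orgs/" plan-ID prefix)
org_sel = backup.get_backup_selection(
    BackupPlanId="orgs/d67a7e29-20b5-3e2b-98a7-24a42ca1a2aa",
    SelectionId="ee883d39-7528-313b-8b72-54de063d5cf0",
)

# Selection of the test plan created directly in the member account
test_sel = backup.get_backup_selection(
    BackupPlanId="2e3367c9-9d9a-446e-9feb-3a4c1ba0b7d3",
    SelectionId="ffa87c07-e463-42a1-9086-f45109fec02f",
)

print(org_sel["BackupSelection"]["Resources"])   # [] -> S3 is never matched
print(test_sel["BackupSelection"]["Resources"])  # ['arn:aws:s3:::*']
```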
Hello, I'm looking to automatically restore a DB snapshot from the backup for our MSSQL instance. The requirement is that it should automatically restore to the same endpoint, e.g. Myapp-db.AWS.Amazon.com. How do I achieve this?
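The only approach I can think of is the restore-then-rename pattern, since the endpoint name is derived from the instance identifier. A rough boto3 sketch of what I mean, with placeholder identifiers (and in practice each step would have to finish before the next). Is this the intended way?
```python
import boto3

rds = boto3.client("rds")

# 1. Restore the snapshot to a temporary instance (placeholder identifiers)
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="myapp-db-restored",
    DBSnapshotIdentifier="myapp-db-snapshot",
)
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="myapp-db-restored")

# 2. Rename (or delete) the old instance to free up the identifier
rds.modify_db_instance(
    DBInstanceIdentifier="myapp-db",
    NewDBInstanceIdentifier="myapp-db-old",
    ApplyImmediately=True,
)

# 3. Rename the restored instance to the original identifier,
#    which recreates the original endpoint name
rds.modify_db_instance(
    DBInstanceIdentifier="myapp-db-restored",
    NewDBInstanceIdentifier="myapp-db",
    ApplyImmediately=True,
)
```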
Hello.
How can I see the size of a specific S3 backup?
I have looked in the AWS console: the Backup dashboard, the recovery points under "Protected Resources", the backup vault (each recovery point), and the jobs history. This information (size and transferred bytes) is not shown in any of them.
I have set up EventBridge and am sending the logs to CloudWatch, but the log events for both "Recovery Point State Change" and "Backup Job State Change" always show: "backupSizeInBytes": "0", "bytesTransferred": "0". I know for sure that the S3 bucket has changed since the last backup; new large objects (GBs) were added.
So I can't find any proof that the backups are actually doing anything, nor what size they occupy.
I can't even see the size of the first, complete backup.
I have tried using the CLI to describe the recovery points and backup jobs; the backupSizeInBytes is always 0.
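For reference, this is roughly how I queried it from the API side as well (the vault name is a placeholder):
```python
import boto3

backup = boto3.client("backup")

# List every recovery point in the vault and print its reported size
resp = backup.list_recovery_points_by_backup_vault(BackupVaultName="my-backup-vault")
for rp in resp["RecoveryPoints"]:
    print(rp["RecoveryPointArn"], rp.get("BackupSizeInBytes"))  # always 0 for my S3 backups
```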
Thanks.
Hello,
Like the subject line says, we are looking to back up a 45 TB on-premises physical server running Ubuntu to AWS, with the ability to restore the services in AWS (for DR purposes).
We have old development data that is archived on this server and are looking to have an offsite (AWS) backup; we are OK with a 24-hour RTO and RPO.
If we can get the backup to S3 with Deep Archive, that would be an ideal solution.
Can you please advise what would be the best and most cost-efficient way to go about this use case?
We would prefer tools or solutions that can be configured out of the box within AWS, and to avoid manual scripting where possible.
I'm trying to schedule an EBS backup every ten minutes. Lifecycle Manager only allows you to go down to an hour. Same story for AWS Backup, but it gives you the ability to write a cron expression. So I put in a cron expression for every ten minutes that looks like this:
```
cron(* 0/10 * * * *)
```
But I receive the following error:
"Support for specifying both a day-of-week AND a day-of-month parameter is not implemented."
I'm far from a cron guru, but it looks like I can't have an asterisk for both the **day of week** and **day of month** parameters, and I'm not sure how else to set it to run every ten minutes without specifying both. Is that a problem with my cron expression, or just a lack of capability in AWS Backup?
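From what I can tell, AWS's six-field cron format (minutes, hours, day-of-month, month, day-of-week, year) requires a `?` in either the day-of-month or the day-of-week position, so my best guess at a corrected every-ten-minutes expression is:
```
cron(0/10 * * * ? *)
```
Is that the correct way to express it?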
I have deployed a few controls using Backup Audit Manager to check the compliance of our backups, but most of them have a control status of "insufficient data". Why is that? Only one control is passing with a compliant status.
I checked the configuration recorder status; it is recording.
I have the following rule, which is clearly a daily backup (at least once a day) with a retention of at least 7 days. Still, the rule control is failing.
```
{
  "ruleName": "daily_backup_rule",
  "scheduleExpression": "cron(0 21 ? * * *)",
  "startWindowMinutes": 60,
  "completionWindowMinutes": 480,
  "lifecycle": {
    "toDeletedAfterDays": 8
  }
}
```
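One thing I'm not sure about: the AWS Backup API documents the lifecycle key as `DeleteAfterDays` (I can't find `toDeletedAfterDays` anywhere in the docs), so maybe my retention isn't being evaluated at all. This boto3 sketch shows the rule shape as I understand it from the API docs; the plan and vault names are placeholders:
```python
import boto3

backup = boto3.client("backup")

# Documented shape of a backup plan rule: the lifecycle uses "DeleteAfterDays"
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-backup-plan",
        "Rules": [
            {
                "RuleName": "daily_backup_rule",
                "TargetBackupVaultName": "my-backup-vault",
                "ScheduleExpression": "cron(0 21 ? * * *)",
                "StartWindowMinutes": 60,
                "CompletionWindowMinutes": 480,
                "Lifecycle": {"DeleteAfterDays": 8},
            }
        ],
    }
)
```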

I have an Aurora DB cluster with a writer and a reader instance. The other day, I got notifications from a CloudWatch alarm that monitors the status of the Aurora reader instance. It said "Recovery of the DB instance has started. Recovery time will vary with the amount of data to be recovered". I checked the access logs but found no accesses to the DB. I also found no anomaly detections on the DB instance. In this case it happened only on the reader instance, which is different from a failover in Multi-AZ. I believed recovery of a DB instance always has to be started manually, but does it run automatically? Please advise.
I can't deploy the VM that I downloaded for the gateway backup so that I can install it and begin synchronizing my data.
I am trying to create an RDS instance from a backup in the AWS console, but the functionality seems to be broken.
I see this error in the browser's network log:
```
<ErrorResponse xmlns="http://rds.amazonaws.com/doc/2014-10-31/">
  <Error>
    <Type>Sender</Type>
    <Code>InvalidParameterCombination</Code>
    <Message>Cannot find version 10.0.17 for aurora-mysql</Message>
  </Error>
  <RequestId>12ff6fc8-f796-4215-8826-719174b9c358</RequestId>
</ErrorResponse>
```
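Is there a way to list which `aurora-mysql` versions are actually valid, so I can pick one that exists? I'm assuming something like this boto3 call would show them:
```python
import boto3

rds = boto3.client("rds")

# List the engine versions RDS will actually accept for aurora-mysql
resp = rds.describe_db_engine_versions(Engine="aurora-mysql")
for v in resp["DBEngineVersions"]:
    print(v["EngineVersion"])
```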
AWS Backup: when adding an EC2 resource, do I have to add the corresponding EBS volumes as well, or is it enough to select the EC2 resource only?
Hi, I'm very new to backups and to AWS in general.
I can't figure out how to upload my files to the cloud, and I have already read the information posted on the site.
Every backup software has an option to browse for the files to upload, but I can't find one here.
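To clarify what I'm after, here is roughly the equivalent in code (the file path and bucket name are made up); I'd like to do the same thing through the console:
```python
import boto3

s3 = boto3.client("s3")

# Upload a single local file into an S3 bucket (path, bucket, and key are placeholders)
s3.upload_file("C:/backups/photos.zip", "my-backup-bucket", "backups/photos.zip")
```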
I know that snapshot IDs are unique for a resource, but I'm wondering how far that uniqueness goes. Is it unique just for that resource, per account, per region, globally throughout AWS, or something else?