
AWS Control Tower Log Archive Bucket Replication: A Secure Alternative to Policy Modifications

12 minute read
Content level: Intermediate

AWS Control Tower blocks direct modifications to the log archive bucket policy to protect audit logs. While you can bypass this using the AWSControlTowerExecution role, this risks drift detection, policy reversion during updates, and compliance violations. This article shows how to configure S3 bucket replication—copying logs to a destination bucket you control—for both same-account and cross-account scenarios without compromising Control Tower's governance.

Introduction: Why This Matters

AWS Control Tower's preventive control (implemented via Service Control Policy) blocks direct modifications to the log archive bucket policy by design. This is a fundamental security measure to maintain audit trail integrity and prevent unauthorized access to compliance-critical logs. Many administrators consider assuming the AWSControlTowerExecution role from the management account to modify the bucket policy directly. While this technically works due to an SCP exception for this role, it's strongly discouraged for several reasons:

  • Drift Detection: Manual changes to Control Tower-managed resources trigger drift alerts and compliance issues
  • Update Risks: Control Tower updates can revert your manual policy changes without warning, causing unexpected access loss
  • Security Concerns: The AWSControlTowerExecution role has AdministratorAccess permissions—using it to bypass SCPs undermines your governance model
  • Compliance Impact: The preventive control exists specifically to maintain audit log integrity; bypassing it may violate regulatory requirements

Instead of modifying the protected bucket policy, AWS recommends setting up S3 bucket replication to copy logs to a destination bucket where you have complete control. This approach maintains Control Tower's security posture while providing the flexibility you need.

Common Use Cases

S3 bucket replication for Control Tower logs addresses numerous real-world scenarios:

  • SIEM Integration: Forward CloudTrail and AWS Config logs to security information and event management platforms like Splunk, Datadog, or Sumo Logic without granting them direct access to the protected log archive bucket.
  • Third-Party Tool Access: Enable compliance tools, backup solutions, or analytics platforms to access logs through a bucket you control, with custom access policies tailored to each tool's requirements.
  • Long-Term Archival: Replicate logs to buckets with Glacier storage classes for cost-effective retention beyond Control Tower's default periods, meeting extended compliance requirements.
  • Cross-Region Disaster Recovery: Maintain disaster recovery copies in different AWS regions for business continuity, ensuring log availability even during regional outages.
  • Regulatory Requirements: Meet data residency or sovereignty requirements by replicating logs to specific regions or accounts that comply with local regulations.
  • Custom Retention Policies: Apply different lifecycle rules on replicated data without affecting the Control Tower-managed bucket, allowing for flexible retention strategies per use case.

Implementation Approaches

Same-Account Replication (Log Archive Account)

This is the simplest approach where both source and destination buckets reside in the Log Archive account. It's ideal for SIEM ingestion, custom retention policies, or storage class optimization.

Prerequisites:

  • Access to the Log Archive account
  • Permissions to create S3 buckets and IAM roles

Step 1: Create the Destination Bucket

Create a new S3 bucket in the Log Archive account with versioning enabled (required for replication):

  1. Navigate to the S3 console in the Log Archive account
  2. Click Create bucket
  3. Enter a unique bucket name (e.g., ct-logs-replica-)
  4. Choose your desired region
  5. Under Bucket Versioning, select Enable
  6. Configure other settings as needed (encryption, logging, etc.)
  7. Click Create bucket
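The console steps above can also be scripted. The sketch below renders the equivalent AWS CLI commands from Python; the bucket name and region are hypothetical placeholders, not values from your environment.

```python
# Sketch: render the AWS CLI commands that create a versioned destination
# bucket. BUCKET and REGION are assumptions -- substitute your own values.
BUCKET = "ct-logs-replica-example"  # placeholder bucket name
REGION = "eu-west-1"                # placeholder region

def render_commands(bucket: str, region: str) -> list[str]:
    """Return the CLI commands for a versioned destination bucket."""
    return [
        # Note: for us-east-1, omit --create-bucket-configuration entirely.
        f"aws s3api create-bucket --bucket {bucket} --region {region} "
        f"--create-bucket-configuration LocationConstraint={region}",
        # Versioning is mandatory on both source and destination for replication.
        f"aws s3api put-bucket-versioning --bucket {bucket} "
        "--versioning-configuration Status=Enabled",
    ]

for cmd in render_commands(BUCKET, REGION):
    print(cmd)
```

Run the printed commands with credentials for the Log Archive account.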

Step 2: Create the Replication IAM Role

Create an IAM role that S3 will assume to perform replication:

  1. Navigate to the IAM console
  2. Click Roles → Create role
  3. Select AWS service as the trusted entity type
  4. Choose S3 from the service list
  5. Click Next
  6. Skip attaching policies for now (we'll add an inline policy)
  7. Name the role (e.g., CT-LogArchive-Replication-Role)
  8. Click Create role

Step 3: Add Trust Policy

Edit the role's trust policy to allow S3 to assume it:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "s3.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

Step 4: Add Permissions Policy

Add an inline policy to the role with the following permissions (replace <SOURCE BUCKET NAME> and <DESTINATION BUCKET NAME> with your actual bucket names):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetReplicationConfiguration",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<SOURCE BUCKET NAME>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectVersionForReplication",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectVersionTagging"
            ],
            "Resource": [
                "arn:aws:s3:::<SOURCE BUCKET NAME>/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete",
                "s3:ReplicateTags"
            ],
            "Resource": "arn:aws:s3:::<DESTINATION BUCKET NAME>/*"
        }
    ]
}
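If you prefer to script Steps 2–4, the two policy documents above can be assembled once in Python and fed to the AWS CLI. This is a sketch: the bucket names are placeholders, and the `aws iam` commands in the comments assume the role name used earlier in this article.

```python
import json

# Placeholders -- substitute your real bucket names.
SOURCE_BUCKET = "aws-controltower-logs-example"  # assumption
DEST_BUCKET = "ct-logs-replica-example"          # assumption

# Trust policy from Step 3: lets the S3 service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

def replication_permissions(source: str, dest: str) -> dict:
    """Build the inline permissions policy from Step 4."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow",
             "Action": ["s3:GetReplicationConfiguration", "s3:ListBucket"],
             "Resource": [f"arn:aws:s3:::{source}"]},
            {"Effect": "Allow",
             "Action": ["s3:GetObjectVersionForReplication",
                        "s3:GetObjectVersionAcl",
                        "s3:GetObjectVersionTagging"],
             "Resource": [f"arn:aws:s3:::{source}/*"]},
            {"Effect": "Allow",
             "Action": ["s3:ReplicateObject", "s3:ReplicateDelete",
                        "s3:ReplicateTags"],
             "Resource": f"arn:aws:s3:::{dest}/*"},
        ],
    }

# Write both documents out, then:
#   aws iam create-role --role-name CT-LogArchive-Replication-Role \
#       --assume-role-policy-document file://trust.json
#   aws iam put-role-policy --role-name CT-LogArchive-Replication-Role \
#       --policy-name replication --policy-document file://perms.json
print(json.dumps(replication_permissions(SOURCE_BUCKET, DEST_BUCKET), indent=2))
```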

Step 5: Configure Replication Rule

Follow the AWS documentation to set up the replication rule (Configuring replication for buckets in the same account):

  1. Navigate to the source bucket (Control Tower log archive bucket)
  2. Go to the Management tab
  3. Scroll to Replication rules and click Create replication rule
  4. Enter a rule name
  5. Choose Apply to all objects in the bucket (or specify a prefix if needed)
  6. Select the destination bucket you created
  7. Choose the IAM role you created
  8. Configure additional options as needed (storage class, encryption, etc.)
  9. Click Save
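The same rule can be applied with `aws s3api put-bucket-replication`. The sketch below builds the replication configuration document that command accepts; the account ID, role name, rule ID, and bucket name are placeholders.

```python
import json

# Sketch: the replication rule from the console walkthrough, as the JSON
# document accepted by `aws s3api put-bucket-replication`.
def replication_config(role_arn: str, dest_bucket: str, prefix: str = "") -> dict:
    rule = {
        "ID": "replicate-ct-logs",        # assumption: your rule name
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {"Prefix": prefix},     # "" applies to all objects
        # Required when Filter is used; enable if you want delete markers copied.
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {"Bucket": f"arn:aws:s3:::{dest_bucket}"},
    }
    return {"Role": role_arn, "Rules": [rule]}

cfg = replication_config(
    "arn:aws:iam::111111111111:role/CT-LogArchive-Replication-Role",  # placeholder
    "ct-logs-replica-example",                                        # placeholder
)
# aws s3api put-bucket-replication --bucket <SOURCE BUCKET NAME> \
#     --replication-configuration file://replication.json
print(json.dumps(cfg, indent=2))
```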

Cross-Account Replication

Cross-account replication is ideal when you need to provide access to logs in a separate AWS account, such as a security tools account, backup account, or third-party service provider account.

Prerequisites:

  • Access to both the Log Archive account (source) and the destination account
  • Permissions to create S3 buckets, bucket policies, and IAM roles in both accounts

Step 1: Create Destination Bucket (Destination Account)

In the destination account:

  1. Create an S3 bucket with versioning enabled (same process as same-account replication)
  2. Note the bucket name and account ID

Step 2: Add Bucket Policy (Destination Account)

Cross-account replication requires a bucket policy on the destination bucket. Add a policy that grants the source account permission to replicate objects:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSourceAccountReplication",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<SOURCE-ACCOUNT-ID>:root"
            },
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete",
                "s3:ReplicateTags",
                "s3:GetObjectVersionTagging",
                "s3:ObjectOwnerOverrideToBucketOwner"
            ],
            "Resource": "arn:aws:s3:::<DESTINATION-BUCKET-NAME>/*"
        },
        {
            "Sid": "AllowSourceAccountListBucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<SOURCE-ACCOUNT-ID>:root"
            },
            "Action": [
                "s3:List*",
                "s3:GetBucketVersioning",
                "s3:PutBucketVersioning"
            ],
            "Resource": "arn:aws:s3:::<DESTINATION-BUCKET-NAME>"
        }
    ]
}

Step 3: Create Replication Role (Source Account)

In the Log Archive account, create an IAM role with the same trust policy and permissions as the same-account scenario, but ensure the destination bucket ARN includes the correct account ID.

Step 4: Configure Replication Rule (Source Account)

Follow the AWS documentation for cross-account replication: Configuring replication for buckets in different accounts

The process is similar to same-account replication, but you'll specify the destination bucket in a different account.

Important Consideration - Replica Ownership:

By default, replicated objects are owned by the source account. For cross-account scenarios, you typically want to change ownership to the destination account owner. Enable Replica ownership override in the replication rule configuration to transfer ownership to the destination bucket owner.
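In the replication configuration document, the ownership override lives in the rule's Destination block. This sketch shows the shape of that block for a cross-account rule; the account ID and bucket name are placeholders.

```python
# Sketch: Destination block of a cross-account replication rule with
# "Replica ownership override" enabled. Values are placeholders.
def cross_account_destination(dest_bucket: str, dest_account_id: str) -> dict:
    return {
        "Bucket": f"arn:aws:s3:::{dest_bucket}",
        # Required alongside AccessControlTranslation: the destination account ID.
        "Account": dest_account_id,
        # Transfers ownership of each replica to the destination bucket owner.
        "AccessControlTranslation": {"Owner": "Destination"},
    }

dest = cross_account_destination("ct-logs-replica-example", "222222222222")
```

Remember that the destination bucket policy must also grant `s3:ObjectOwnerOverrideToBucketOwner`, as shown in Step 2.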

KMS Encryption Considerations

If your Control Tower log archive bucket uses AWS KMS encryption (SSE-KMS), additional configuration is required.

Understanding the Encryption Flow:

  • S3 replication must decrypt objects from the source bucket using the source KMS key
  • Then encrypt objects in the destination bucket using the destination KMS key (or SSE-S3)

Additional IAM Permissions Required:

Add these permissions to your replication role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt"
            ],
            "Resource": "arn:aws:kms:<region>:<source-account-id>:key/<source-kms-key-id>",
            "Condition": {
                "StringLike": {
                    "kms:ViaService": "s3.<region>.amazonaws.com",
                    "kms:EncryptionContext:aws:s3:arn": [
                        "arn:aws:s3:::<SOURCE-BUCKET-NAME>/*"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt",
                "kms:GenerateDataKey"
            ],
            "Resource": "arn:aws:kms:<region>:<destination-account-id>:key/<destination-kms-key-id>",
            "Condition": {
                "StringLike": {
                    "kms:ViaService": "s3.<region>.amazonaws.com",
                    "kms:EncryptionContext:aws:s3:arn": [
                        "arn:aws:s3:::<DESTINATION-BUCKET-NAME>/*"
                    ]
                }
            }
        }
    ]
}
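Beyond the IAM permissions, the replication rule itself must opt in to SSE-KMS objects and name the key used for replicas. The sketch below shows those rule-level settings as they appear in the `put-bucket-replication` document; the bucket name and key ARN are placeholders.

```python
# Sketch: a replication rule that replicates SSE-KMS-encrypted objects and
# re-encrypts replicas with a destination key. All values are placeholders.
def kms_replication_rule(dest_bucket: str, replica_kms_key_arn: str) -> dict:
    return {
        "ID": "replicate-kms-objects",   # assumption: your rule name
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {"Prefix": ""},
        "DeleteMarkerReplication": {"Status": "Disabled"},
        # Opt in to replicating objects encrypted with SSE-KMS.
        "SourceSelectionCriteria": {
            "SseKmsEncryptedObjects": {"Status": "Enabled"}
        },
        "Destination": {
            "Bucket": f"arn:aws:s3:::{dest_bucket}",
            # Key used to encrypt replicas; omit to fall back to the
            # destination bucket's default encryption (e.g., SSE-S3).
            "EncryptionConfiguration": {"ReplicaKmsKeyID": replica_kms_key_arn},
        },
    }

rule = kms_replication_rule(
    "ct-logs-replica-example",
    "arn:aws:kms:eu-west-1:222222222222:key/00000000-0000-0000-0000-000000000000",
)
```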

KMS Key Policy Updates:

Both the source and destination KMS keys need policy updates to allow the replication role:

Source KMS Key Policy:

{
    "Sid": "Allow replication role to decrypt",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::<source-account-id>:role/<replication-role-name>"
    },
    "Action": "kms:Decrypt",
    "Resource": "*"
}

Destination KMS Key Policy (if using KMS):

{
    "Sid": "Allow replication role to encrypt",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::<source-account-id>:role/<replication-role-name>"
    },
    "Action": [
        "kms:Encrypt",
        "kms:GenerateDataKey"
    ],
    "Resource": "*"
}

Recommendation: If your use case doesn't require KMS encryption on the destination bucket, consider using SSE-S3 instead. This simplifies configuration, reduces costs (no KMS API charges), and still provides encryption at rest.

For detailed guidance, see: Replicating encrypted objects (SSE-KMS)

Key Considerations

Understanding the operational and financial implications of S3 bucket replication helps you design a solution that balances security, compliance, and cost-effectiveness. The replication approach introduces duplicate storage and data transfer costs, but these are typically offset by the operational benefits and risk mitigation it provides compared to bypassing Control Tower's security controls.

Cost Management

When implementing replication, you'll incur charges across several dimensions. Data transfer costs vary by region—same-region replication avoids inter-region transfer fees, while cross-region replication adds per-GB charges that differ based on source and destination regions. Storage costs double since you're maintaining copies in both buckets, though you can mitigate this through intelligent lifecycle policies that transition older logs to cheaper storage classes like Glacier. Each replicated object generates PUT request charges, and if you're using KMS encryption, every decrypt and encrypt operation adds to your AWS KMS bill.

To control costs effectively, consider using prefix filters in your replication rules to replicate only the logs you actually need—for example, replicating only CloudTrail logs while excluding AWS Config data if your use case doesn't require it. Implement lifecycle policies on the destination bucket to automatically transition objects to lower-cost storage tiers after a defined period. When possible, choose same-region replication to eliminate cross-region data transfer charges, and evaluate whether SSE-S3 encryption on the destination bucket meets your security requirements, as it eliminates KMS API costs while still providing encryption at rest.
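As one concrete example of the lifecycle approach, the sketch below builds a lifecycle configuration for the destination bucket that tiers replicated logs to Glacier and later expires them. The day counts are illustrative assumptions, not recommendations.

```python
import json

# Sketch: lifecycle configuration for the destination bucket. The 90-day
# transition and ~7-year expiration are example values -- tune to your
# retention requirements.
lifecycle = {
    "Rules": [{
        "ID": "tier-and-expire-replicated-logs",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},   # apply to every replicated object
        "Transitions": [
            {"Days": 90, "StorageClass": "GLACIER"},   # assumption: 90 days
        ],
        "Expiration": {"Days": 2555},                  # assumption: ~7 years
    }]
}
# aws s3api put-bucket-lifecycle-configuration --bucket <DESTINATION-BUCKET-NAME> \
#     --lifecycle-configuration file://lifecycle.json
print(json.dumps(lifecycle, indent=2))
```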

Replication Behavior and Operations

S3 replication operates asynchronously, typically completing within 15 minutes though timing can vary based on object size and AWS service load. By default, only objects created after you enable replication are copied to the destination—existing objects require S3 Batch Replication to backfill. Delete markers aren't replicated by default, though you can configure this behavior in your replication rule. The service preserves object metadata and tags during replication, ensuring consistency between source and destination. If you're using S3 Object Lock for immutability, you'll need special configuration to replicate locked objects properly.

Monitoring your replication health is straightforward using CloudWatch metrics like BytesPendingReplication and ReplicationLatency. Set up EventBridge rules to alert you when replication failures occur, and regularly check replication status through the S3 console or APIs. Document your replication configuration thoroughly for your team, including the business justification and any specific prefix filters or storage class selections you've made. If you're using replication for disaster recovery purposes, test your failover scenarios periodically to ensure logs are accessible when needed. Apply lifecycle policies to manage long-term costs, and remember that unlike manual bucket policy changes, replication configurations survive Control Tower updates without creating drift.
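To make the monitoring advice concrete, the sketch below builds a CloudWatch alarm definition on the ReplicationLatency metric (available when replication metrics are enabled on the rule). The alarm name and 900-second threshold are assumptions; the dimensions follow the AWS/S3 replication metrics namespace.

```python
# Sketch: a CloudWatch alarm on S3 replication latency. Pass the resulting
# dict as keyword arguments to a put-metric-alarm call, or mirror it in the
# console. Names and threshold are placeholders.
def replication_latency_alarm(source: str, dest: str, rule_id: str) -> dict:
    return {
        "AlarmName": "ct-log-replication-latency",  # assumption
        "Namespace": "AWS/S3",
        "MetricName": "ReplicationLatency",
        "Dimensions": [
            {"Name": "SourceBucket", "Value": source},
            {"Name": "DestinationBucket", "Value": dest},
            {"Name": "RuleId", "Value": rule_id},
        ],
        "Statistic": "Maximum",
        "Period": 300,
        "EvaluationPeriods": 3,
        "Threshold": 900,   # seconds behind; tune to your recovery objective
        "ComparisonOperator": "GreaterThanThreshold",
    }

alarm = replication_latency_alarm(
    "aws-controltower-logs-example", "ct-logs-replica-example", "replicate-ct-logs"
)
```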

Security Best Practices

Security remains paramount even when working with replicated data. Apply least-privilege IAM policies to your replication role, granting only the specific permissions required for replication operations. Bucket versioning must be enabled on the destination bucket—it's a replication requirement that also provides additional data protection. If you're accessing S3 from within a VPC, use VPC endpoints to keep traffic on the AWS network. Enable CloudTrail logging on the destination bucket to maintain a complete audit trail of who accesses your replicated logs.

For enhanced data protection, consider implementing S3 Object Lock on the destination bucket to make logs immutable for a defined retention period—this is particularly valuable for compliance requirements. Regularly audit access to the destination bucket using AWS IAM Access Analyzer and S3 Access Analyzer to identify any overly permissive policies. Implement bucket policies that explicitly restrict access to authorized principals only, following the principle of least privilege. For sensitive environments, enable MFA Delete on the destination bucket to require multi-factor authentication before anyone can delete object versions or disable versioning, adding an extra layer of protection against accidental or malicious data loss.
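One way to express "authorized principals only" is an explicit-deny bucket policy keyed on the caller's ARN. The sketch below is a minimal illustration with placeholder ARNs; note that the replication role itself must appear on the allow-list, or replication writes will be denied.

```python
# Sketch: a deny-by-default bucket policy for the destination bucket.
# allowed_arns must include the replication role and any authorized readers.
# All ARNs below are placeholders.
def restrict_to_principals(bucket: str, allowed_arns: list[str]) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyAllButAuthorized",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}",
                         f"arn:aws:s3:::{bucket}/*"],
            # Explicit deny for any caller whose ARN is not on the allow-list.
            "Condition": {
                "ArnNotLike": {"aws:PrincipalArn": allowed_arns}
            },
        }],
    }

policy = restrict_to_principals(
    "ct-logs-replica-example",
    ["arn:aws:iam::111111111111:role/CT-LogArchive-Replication-Role",
     "arn:aws:iam::222222222222:role/siem-reader"],   # placeholder reader role
)
```

Test a policy like this in a sandbox first; an over-broad explicit deny can lock out administrators as well.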

Conclusion

S3 bucket replication represents the AWS-recommended approach for providing access to Control Tower log archive data without compromising the security governance that Control Tower was designed to enforce. This solution maintains Control Tower's security posture without creating drift or triggering compliance alerts, while providing the flexibility needed for multiple use cases including SIEM integration, compliance tools, disaster recovery, and third-party access. The approach scales automatically with your organization's growth and log volume, requires minimal ongoing maintenance once configured, and preserves audit trail integrity while providing necessary access.

To get started, identify your specific use case—whether it's SIEM integration, compliance requirements, disaster recovery, or third-party tool access. Choose between same-account or cross-account replication based on your organizational structure and security requirements. Follow the implementation steps outlined in this article, test the replication thoroughly, and verify logs are flowing to your destination bucket as expected. Finally, configure monitoring and alerts for replication health to ensure ongoing operational visibility.

For additional questions or to share your implementation experience, feel free to engage in the comments below!
