
Error "access denied" when creating S3 bucket with Terraform


Hello, when I apply the plan for this main.tf with my user "terraform", which has administrator full access, I get these errors:

Error: creating S3 Bucket (sylvain-ard-f7add749) ACL: operation error S3: PutBucketAcl, https response error StatusCode: 403, api error AccessDenied: Access Denied
  with aws_s3_bucket_acl.images_bucket_acl,
  on main.tf line 26, in resource "aws_s3_bucket_acl" "images_bucket_acl":
  26: resource "aws_s3_bucket_acl" "images_bucket_acl" {

Error: putting S3 Bucket (sylvain-ard-f7add749) Policy: operation error S3: PutBucketPolicy, https response error StatusCode: 403, api error AccessDenied: Access Denied
  with aws_s3_bucket_policy.images_bucket_policy,
  on main.tf line 43, in resource "aws_s3_bucket_policy" "images_bucket_policy":
  43: resource "aws_s3_bucket_policy" "images_bucket_policy" {

Here is my main.tf:

provider "aws" {
  region = "us-east-1"
}

provider "random" {
  # You can specify the version here if necessary
}

resource "random_id" "bucket_suffix" {
  byte_length = 4
}

resource "aws_key_pair" "deployer" {
  key_name   = var.key_name
  public_key = var.public_key
}

resource "aws_s3_bucket" "images_bucket" {
  bucket = "sylvain-ard-${random_id.bucket_suffix.hex}"

  tags = {
    Name = "images_bucket"
  }
}

resource "aws_s3_bucket_acl" "images_bucket_acl" {
  bucket = aws_s3_bucket.images_bucket.bucket
  acl    = "public-read"
}

resource "aws_s3_bucket_website_configuration" "images_bucket_website" {
  bucket = aws_s3_bucket.images_bucket.bucket

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }
}

resource "aws_s3_bucket_policy" "images_bucket_policy" {
  bucket = aws_s3_bucket.images_bucket.id

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${aws_s3_bucket.images_bucket.bucket}/*"
    }
  ]
}
EOF
}

resource "aws_instance" "web_server" {
  ami           = "ami-00beae93a2d981137" 
  instance_type = "t2.micro"
  key_name      = aws_key_pair.deployer.key_name

  user_data = <<-EOF
    #!/bin/bash
    sudo yum update -y
    sudo yum install -y httpd php php-cli php-json php-mbstring git
    sudo systemctl start httpd
    sudo systemctl enable httpd

    # Install Composer
    php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
    php composer-setup.php --install-dir=/usr/local/bin --filename=composer
    php -r "unlink('composer-setup.php');"
  EOF

  tags = {
    Name = "WebServer"
  }

  vpc_security_group_ids = [aws_security_group.web_sg.id]
}

resource "aws_security_group" "web_sg" {
  name        = "web_sg"
  description = "Allow HTTP and SSH traffic"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

output "bucket_name" {
  value = aws_s3_bucket.images_bucket.bucket
}

output "web_server_public_ip" {
  value = aws_instance.web_server.public_ip
}

Thank you for helping me. Best regards

asked 2 years ago · 1.1K views
5 Answers

The most likely reason is that you have Block Public Access (BPA) configured on your AWS account and/or the bucket. Regardless of permissions, that will prevent you from setting a bucket policy that permits public access, as you're trying to do, or configuring an ACL that permits public access on an ACL-enabled bucket. The account-level settings are explained in detail here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/configuring-block-public-access-account.html
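If you really do intend to keep the bucket public despite that advice, both the bucket-level BPA settings and object ownership can be managed from Terraform as well. A minimal sketch, assuming the `aws_s3_bucket.images_bucket` resource from your configuration (resource names here are illustrative, and account-level BPA, if enabled, will still override the bucket-level settings):

```hcl
# Relax Block Public Access on this bucket only.
resource "aws_s3_bucket_public_access_block" "images_bucket" {
  bucket = aws_s3_bucket.images_bucket.id

  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}

# ACLs are disabled by default on new buckets; they must be re-enabled
# before a public-read ACL can be applied at all.
resource "aws_s3_bucket_ownership_controls" "images_bucket" {
  bucket = aws_s3_bucket.images_bucket.id

  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

# Your existing aws_s3_bucket_acl and aws_s3_bucket_policy resources should
# then declare depends_on for these two resources so the apply is ordered
# correctly.
```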

The general advice is not to make buckets public or to use S3's static website hosting feature, but instead, to use CloudFront to publish your site and have CloudFront use Origin Access Control (OAC) for authenticated access to your S3 bucket. Your end users can access the site without any authentication by connecting to the CloudFront distribution rather than directly accessing your bucket. This documentation article walks through setting that up: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.SimpleDistribution.html

EXPERT
answered 2 years ago

OK, thank you. Can we automate the CloudFront and S3 creation and configuration in a Terraform script?

answered 2 years ago
  • Yes, certainly. CloudFront is also fully supported by Terraform.


I have written the script below and it ran without errors. Do you have any remarks about it?

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = ">= 4.0.0"
    }
    random = {
      source = "hashicorp/random"
      version = ">= 3.1.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

provider "random" {
  # Random provider configuration, without a version constraint
}

resource "random_id" "bucket_suffix" {
  byte_length = 4
}

resource "aws_key_pair" "deployer" {
  key_name   = var.key_name
  public_key = var.public_key
}

resource "aws_s3_bucket" "images_bucket" {
  bucket = "sylvain-ard-${random_id.bucket_suffix.hex}"

  tags = {
    Name = "images_bucket"
  }
}

resource "aws_s3_bucket_website_configuration" "images_bucket_website" {
  bucket = aws_s3_bucket.images_bucket.bucket

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }
}

resource "aws_s3_bucket_policy" "images_bucket_policy" {
  bucket = aws_s3_bucket.images_bucket.id

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "${aws_cloudfront_origin_access_identity.oai.iam_arn}"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${aws_s3_bucket.images_bucket.bucket}/*"
    }
  ]
}
EOF
}

resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "OAI for S3 bucket"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = "${aws_s3_bucket.images_bucket.bucket}.s3.amazonaws.com"
    origin_id   = "S3-${aws_s3_bucket.images_bucket.bucket}"

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
    }
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "S3 distribution"
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-${aws_s3_bucket.images_bucket.bucket}"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  tags = {
    Name = "S3 distribution"
  }
}

resource "aws_instance" "web_server" {
  ami           = "ami-00beae93a2d981137"
  instance_type = "t2.micro"
  key_name      = aws_key_pair.deployer.key_name

  user_data = <<-EOF
    #!/bin/bash
    sudo yum update -y
    sudo yum install -y httpd php php-cli php-json php-mbstring git
    sudo systemctl start httpd
    sudo systemctl enable httpd

    # Install Composer
    php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
    php composer-setup.php --install-dir=/usr/local/bin --filename=composer
    php -r "unlink('composer-setup.php');"
  EOF

  tags = {
    Name = "WebServer"
  }

  vpc_security_group_ids = [aws_security_group.web_sg.id]
}

resource "aws_security_group" "web_sg" {
  name        = "web_sg"
  description = "Allow HTTP and SSH traffic"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

output "bucket_name" {
  value = aws_s3_bucket.images_bucket.bucket
}

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}

output "web_server_public_ip" {
  value = aws_instance.web_server.public_ip
}
answered 2 years ago

The setting viewer_certificate { cloudfront_default_certificate = true } makes the CloudFront distribution use CloudFront's default TLS certificate for *.cloudfront.net. Technically that's fine, but if you want your website to appear to users under your own domain name, you'll want to obtain a TLS certificate for your domain from AWS Certificate Manager (ACM). TLS certificates issued by ACM are completely free. I'd suggest ECDSA 256-bit (EC_prime256v1) as the certificate's key algorithm for the best performance and good security.

Note that you'll need to obtain the certificate from ACM specifically in the us-east-1 region to be able to associate it with a CloudFront distribution. Documentation contains more details for CloudFront: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cnames-and-https-procedures.html and for ACM: https://docs.aws.amazon.com/acm/latest/userguide/gs.html.
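Requesting such a certificate can itself be done in Terraform. A sketch, assuming a recent AWS provider (the `key_algorithm` argument is not available in very old provider versions) and a placeholder domain; DNS validation records must still be created in your DNS zone:

```hcl
# ACM certificates for CloudFront must be requested in us-east-1.
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

resource "aws_acm_certificate" "site" {
  provider          = aws.use1
  domain_name       = "www.example.com" # placeholder domain
  validation_method = "DNS"
  key_algorithm     = "EC_prime256v1"   # ECDSA P-256, as suggested above

  lifecycle {
    create_before_destroy = true
  }
}

# Then reference it in the distribution instead of the default certificate:
#   viewer_certificate {
#     acm_certificate_arn      = aws_acm_certificate.site.arn
#     ssl_support_method       = "sni-only"
#     minimum_protocol_version = "TLSv1.2_2021"
#   }
```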

Specifying default_ttl, min_ttl, and max_ttl is supported, but the recommended modern way to control caching is with cache policies. You can find the technical GUIDs for managed cache policies provided by AWS in this documentation article: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-cache-policies.html. Usually, CachingOptimized is a good option to start with, but you can also create your own, custom cache policy. When you specify a cache policy, you should not specify default_ttl, min_ttl, or max_ttl.
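In Terraform you can look up a managed cache policy by name rather than hardcoding its GUID. A sketch of wiring CachingOptimized into your distribution:

```hcl
# Resolve the AWS-managed CachingOptimized policy to its ID.
data "aws_cloudfront_cache_policy" "caching_optimized" {
  name = "Managed-CachingOptimized"
}

# In default_cache_behavior, replace forwarded_values and the
# min_ttl/default_ttl/max_ttl settings with:
#   cache_policy_id = data.aws_cloudfront_cache_policy.caching_optimized.id
```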

CloudFront can authenticate to your S3 bucket with Origin Access Identity (OAI) as you've configured it. It's fine. However, the newer and recommended alternative is Origin Access Control (OAC), so you might want to switch to it: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudfront_origin_access_control
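Switching your script from OAI to OAC would look roughly like this; a sketch that assumes the resource names from your configuration:

```hcl
resource "aws_cloudfront_origin_access_control" "oac" {
  name                              = "images-bucket-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

# In the distribution's origin block, drop s3_origin_config and use:
#   origin {
#     domain_name              = aws_s3_bucket.images_bucket.bucket_regional_domain_name
#     origin_id                = "S3-${aws_s3_bucket.images_bucket.bucket}"
#     origin_access_control_id = aws_cloudfront_origin_access_control.oac.id
#   }

# The bucket policy then grants the CloudFront service principal access,
# scoped to this one distribution:
#   {
#     "Effect": "Allow",
#     "Principal": { "Service": "cloudfront.amazonaws.com" },
#     "Action": "s3:GetObject",
#     "Resource": "arn:aws:s3:::${aws_s3_bucket.images_bucket.bucket}/*",
#     "Condition": {
#       "StringEquals": {
#         "AWS:SourceArn": "${aws_cloudfront_distribution.s3_distribution.arn}"
#       }
#     }
#   }
```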

The security group of your EC2 instance allows HTTP and SSH access from the entire internet. For HTTP and HTTPS, it would be recommended to point your users to the CloudFront distribution, point the distribution to an Application Load Balancer (ALB) in the same VPC with your EC2 instance, and set the EC2 instance as a target for the ALB. The ALB can also use a certificate from ACM (in the same region with the ALB). You can set the security group of the ALB to allow tcp/443 (HTTPS) from the managed prefix list com.amazonaws.global.cloudfront.origin-facing available in every region. The EC2 instance should allow HTTP or HTTPS from the ALB but not from the public internet. This will allow CloudFront to connect to your EC2 instance through the ALB over HTTPS (or HTTP) but block direct access to the ALB or EC2 instance from the internet over HTTP(S).
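The CloudFront origin-facing prefix list can be referenced directly in Terraform. A sketch of an ALB security group built that way (the resource names are illustrative, and the ALB and listener resources themselves are not shown):

```hcl
# AWS-managed prefix list of CloudFront origin-facing IP ranges.
data "aws_ec2_managed_prefix_list" "cloudfront" {
  name = "com.amazonaws.global.cloudfront.origin-facing"
}

resource "aws_security_group" "alb_sg" {
  name        = "alb_sg"
  description = "Allow HTTPS from CloudFront only"

  ingress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    prefix_list_ids = [data.aws_ec2_managed_prefix_list.cloudfront.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# The EC2 instance's security group then allows HTTP only from the ALB:
#   ingress {
#     from_port       = 80
#     to_port         = 80
#     protocol        = "tcp"
#     security_groups = [aws_security_group.alb_sg.id]
#   }
```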

If you don't want to use an ALB, you can also set CloudFront to connect directly to your EC2 instance, but for HTTPS, that requires a publicly trusted TLS certificate also on your EC2 instance. The ALB has the benefit of being able to use ACM-issued certificates. To avoid either using an ALB or obtaining a TLS certificate, you can also set CloudFront to connect to your EC2 instance over HTTP, but of course, the traffic would then be unencrypted.

You should restrict SSH access to your server to your own IP address. Permitting SSH access from the entire internet immediately makes your server a target for countless attackers trying to guess your username/password or to find a vulnerability in SSH.
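In your security group that means replacing the open SSH rule with one scoped to your address. A sketch, assuming a hypothetical `my_ip` variable holding your public IP:

```hcl
variable "my_ip" {
  description = "Your public IP address, e.g. 203.0.113.10"
  type        = string
}

# Replaces the 0.0.0.0/0 SSH ingress rule in web_sg:
#   ingress {
#     from_port   = 22
#     to_port     = 22
#     protocol    = "tcp"
#     cidr_blocks = ["${var.my_ip}/32"]
#   }
```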

You might also want to add the general guardrail statements below to your S3 bucket policy. They block access that doesn't use HTTPS with at least TLS 1.2, and they block authenticated requests that use signature algorithms older than SigV4, which has been the current version for many years. They also deny completely unauthenticated access:

{
	"Sid": "DenyUnencryptedAccess",
	"Effect": "Deny",
	"Principal": "*",
	"Action": "s3:*",
	"Resource": [
		"${aws_s3_bucket.images_bucket.arn}",
		"${aws_s3_bucket.images_bucket.arn}/*"
	],
	"Condition": {
		"Bool": {
			"aws:SecureTransport": "false"
		}
	}
},
{
	"Sid": "DenyOutdatedTlsVersions",
	"Effect": "Deny",
	"Principal": "*",
	"Action": "s3:*",
	"Resource": [
		"${aws_s3_bucket.images_bucket.arn}",
		"${aws_s3_bucket.images_bucket.arn}/*"
	],
	"Condition": {
		"NumericLessThanIfExists": {
			"s3:TlsVersion": "1.2"
		}
	}
},
{
	"Sid": "DenyOutdatedAuthSignature",
	"Effect": "Deny",
	"Principal": "*",
	"Action": "s3:*",
	"Resource": [
		"${aws_s3_bucket.images_bucket.arn}",
		"${aws_s3_bucket.images_bucket.arn}/*"
	],
	"Condition": {
		"StringNotEqualsIfExists": {
			"s3:signatureversion": "AWS4-HMAC-SHA256"
		}
	}
}
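When embedding these statements in Terraform, jsonencode() is generally less error-prone than a heredoc. A sketch combining the first deny statement with your existing Allow statement; the other two statements go in the same list in the same style:

```hcl
resource "aws_s3_bucket_policy" "images_bucket_policy" {
  bucket = aws_s3_bucket.images_bucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "DenyUnencryptedAccess"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource = [
          aws_s3_bucket.images_bucket.arn,
          "${aws_s3_bucket.images_bucket.arn}/*",
        ]
        Condition = {
          Bool = { "aws:SecureTransport" = "false" }
        }
      },
      # ... the DenyOutdatedTlsVersions and DenyOutdatedAuthSignature
      # statements from above, plus your existing Allow statement ...
    ]
  })
}
```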
EXPERT
answered 2 years ago

OK, I need to recreate my re:Post account because it didn't work, but the answer is accepted.

answered 2 years ago
