How do I use custom resources with Amazon S3 buckets in CloudFormation?

6 minute read

I want to use custom resources with Amazon Simple Storage Service (Amazon S3) buckets in AWS CloudFormation.

Short description

To use custom resources with an S3 bucket, use the CloudFormation template in the following resolution.

Keep the following in mind:

**Note:** In the following resolution, all content in the S3 bucket is deleted when the CloudFormation stack is deleted. You can change this behavior by modifying the Lambda function code, as shown in the sketch that follows.
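
For example, the Delete branch of the handler is the part that empties the bucket through boto3. The following is a minimal sketch of that behavior next to a retain-on-delete alternative; empty_bucket and retain_bucket_contents are illustrative helper names, not part of the template:

import boto3

def empty_bucket(bucket_name):
    # What the template's Delete branch does today: remove every object so
    # that CloudFormation can then delete the (now empty) bucket.
    boto3.resource('s3').Bucket(bucket_name).objects.all().delete()

def retain_bucket_contents(bucket_name):
    # Hypothetical alternative: do nothing on Delete so that existing
    # objects are kept when the stack is deleted.
    print("Retaining all objects in " + bucket_name)

If you retain the contents, keep in mind that CloudFormation can't delete a bucket that still holds objects, so also set a DeletionPolicy of Retain on SampleS3Bucket or empty and delete the bucket yourself after the stack is deleted.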

Resolution

Note: If you receive errors when you run AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent version of the AWS CLI.

Get the CloudFormation template

To create folders in an S3 bucket with CloudFormation, save the following JSON or YAML template:

JSON:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Working with custom resources and S3",
  "Parameters": {
    "S3BucketName": {
      "Type": "String",
      "Description": "S3 bucket to create.",
      "AllowedPattern": "[a-zA-Z][a-zA-Z0-9_-]*"
    },
    "DirsToCreate": {
      "Description": "Comma delimited list of directories to create.",
      "Type": "CommaDelimitedList"
    }
  },
  "Resources": {
    "SampleS3Bucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketName": {"Ref":"S3BucketName"}
      }
    },
    "S3CustomResource": {
      "Type": "Custom::S3CustomResource",
      "DependsOn":"AWSLambdaExecutionRole",
      "Properties": {
        "ServiceToken": {"Fn::GetAtt": ["AWSLambdaFunction","Arn"]},
        "the_bucket": {"Ref":"SampleS3Bucket"},
        "dirs_to_create": {"Ref":"DirsToCreate"}
      }
    },
    "AWSLambdaFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Description": "Work with S3 Buckets!",
        "FunctionName": {"Fn::Sub":"${AWS::StackName}-${AWS::Region}-lambda"},
        "Handler": "index.handler",
        "Role": {"Fn::GetAtt": ["AWSLambdaExecutionRole","Arn"]},
        "Timeout": 360,
        "Runtime": "python3.9",
        "Code": {
          "ZipFile": "import boto3\r\nimport cfnresponse\r\ndef handler(event, context):\r\n    # Init ...\r\n    the_event = event['RequestType']\r\n    print(\"The event is: \", str(the_event))\r\n    response_data = {}\r\n    s_3 = boto3.client('s3')\r\n    # Retrieve parameters\r\n    the_bucket = event['ResourceProperties']['the_bucket']\r\n    dirs_to_create = event['ResourceProperties']['dirs_to_create']\r\n    try:\r\n        if the_event in ('Create', 'Update'):\r\n            print(\"Requested folders: \", str(dirs_to_create))\r\n            for dir_name in dirs_to_create:\r\n                print(\"Creating: \", str(dir_name))\r\n                s_3.put_object(Bucket=the_bucket,\r\n                                Key=(dir_name\r\n                                    + '\/'))\r\n        elif the_event == 'Delete':\r\n            print(\"Deleting S3 content...\")\r\n            b_operator = boto3.resource('s3')\r\n            b_operator.Bucket(str(the_bucket)).objects.all().delete()\r\n        # Everything OK... send the signal back\r\n        print(\"Operation successful!\")\r\n        cfnresponse.send(event,\r\n                        context,\r\n                        cfnresponse.SUCCESS,\r\n                        response_data)\r\n    except Exception as e:\r\n        print(\"Operation failed...\")\r\n        print(str(e))\r\n        response_data['Data'] = str(e)\r\n        cfnresponse.send(event,\r\n                        context,\r\n                        cfnresponse.FAILED,\r\n                        response_data)"
        }
      }
    },
    "AWSLambdaExecutionRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Statement": [
            {
              "Action": [
                "sts:AssumeRole"
              ],
              "Effect": "Allow",
              "Principal": {
                "Service": [
                  "lambda.amazonaws.com"
                ]
              }
            }
          ],
          "Version": "2012-10-17"
        },
        "Path": "/",
        "Policies": [
          {
            "PolicyDocument": {
              "Statement": [
                {
                  "Action": [
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                  ],
                  "Effect": "Allow",
                  "Resource": "arn:aws:logs:*:*:*"
                }
              ],
              "Version": "2012-10-17"
            },
            "PolicyName": {"Fn::Sub": "${AWS::StackName}-${AWS::Region}-AWSLambda-CW"}
          },
          {
            "PolicyDocument": {
              "Statement": [
                {
                  "Action": [
                    "s3:PutObject",
                    "s3:DeleteObject",
                    "s3:List*"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                    {"Fn::Sub": "arn:aws:s3:::${SampleS3Bucket}/*"},
                    {"Fn::Sub": "arn:aws:s3:::${SampleS3Bucket}"}
                  ]
                }
              ],
              "Version": "2012-10-17"
            },
            "PolicyName": {"Fn::Sub":"${AWS::StackName}-${AWS::Region}-AWSLambda-S3"}
          }
        ],
        "RoleName": {"Fn::Sub":"${AWS::StackName}-${AWS::Region}-AWSLambdaExecutionRole"}
      }
    }
  }
}

YAML:

AWSTemplateFormatVersion: 2010-09-09
Description: Working with custom resources and S3
Parameters:
  S3BucketName:
    Type: String
    Description: "S3 bucket to create."
    AllowedPattern: "[a-zA-Z][a-zA-Z0-9_-]*"
  DirsToCreate:
    Description: "Comma delimited list of directories to create."
    Type: CommaDelimitedList
Resources:
  SampleS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref S3BucketName
  S3CustomResource:
    Type: Custom::S3CustomResource
    Properties:
      ServiceToken: !GetAtt AWSLambdaFunction.Arn
      the_bucket: !Ref SampleS3Bucket
      dirs_to_create: !Ref DirsToCreate
  AWSLambdaFunction:
     Type: "AWS::Lambda::Function"
     Properties:
       Description: "Work with S3 Buckets!"
       FunctionName: !Sub '${AWS::StackName}-${AWS::Region}-lambda'
       Handler: index.handler
       Role: !GetAtt AWSLambdaExecutionRole.Arn
       Timeout: 360
       Runtime: python3.9
       Code:
         ZipFile: |
          import boto3
          import cfnresponse
          def handler(event, context):
              # Init ...
              the_event = event['RequestType']
              print("The event is: ", str(the_event))
              response_data = {}
              s_3 = boto3.client('s3')
              # Retrieve parameters
              the_bucket = event['ResourceProperties']['the_bucket']
              dirs_to_create = event['ResourceProperties']['dirs_to_create']
              try:
                  if the_event in ('Create', 'Update'):
                      print("Requested folders: ", str(dirs_to_create))
                      for dir_name in dirs_to_create:
                          print("Creating: ", str(dir_name))
                          s_3.put_object(Bucket=the_bucket,
                                         Key=(dir_name
                                              + '/'))
                  elif the_event == 'Delete':
                      print("Deleting S3 content...")
                      b_operator = boto3.resource('s3')
                      b_operator.Bucket(str(the_bucket)).objects.all().delete()
                  # Everything OK... send the signal back
                  print("Operation successful!")
                  cfnresponse.send(event,
                                   context,
                                   cfnresponse.SUCCESS,
                                   response_data)
              except Exception as e:
                  print("Operation failed...")
                  print(str(e))
                  response_data['Data'] = str(e)
                  cfnresponse.send(event,
                                   context,
                                   cfnresponse.FAILED,
                                   response_data)
  AWSLambdaExecutionRole:
     Type: AWS::IAM::Role
     Properties:
       AssumeRolePolicyDocument:
         Statement:
         - Action:
           - sts:AssumeRole
           Effect: Allow
           Principal:
             Service:
             - lambda.amazonaws.com
         Version: '2012-10-17'
       Path: "/"
       Policies:
       - PolicyDocument:
           Statement:
           - Action:
             - logs:CreateLogGroup
             - logs:CreateLogStream
             - logs:PutLogEvents
             Effect: Allow
             Resource: arn:aws:logs:*:*:*
           Version: '2012-10-17'
         PolicyName: !Sub ${AWS::StackName}-${AWS::Region}-AWSLambda-CW
       - PolicyDocument:
           Statement:
           - Action:
             - s3:PutObject
             - s3:DeleteObject
             - s3:List*
             Effect: Allow
             Resource:
             - !Sub arn:aws:s3:::${SampleS3Bucket}/*
             - !Sub arn:aws:s3:::${SampleS3Bucket}
           Version: '2012-10-17'
         PolicyName: !Sub ${AWS::StackName}-${AWS::Region}-AWSLambda-S3
       RoleName: !Sub ${AWS::StackName}-${AWS::Region}-AWSLambdaExecutionRole

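When CloudFormation creates, updates, or deletes the Custom::S3CustomResource resource, it invokes the Lambda function with an event that carries the properties defined on the custom resource, and the handler reports the result back to CloudFormation through the cfnresponse module. The following is an illustrative sketch of the ResourceProperties part of a Create event, using the example values from the deployment steps below; the real event also contains additional keys, such as the response URL, that are omitted here:

# Illustrative shape of the event that the custom resource passes to the
# handler on a Create request (extra keys omitted for brevity).
event = {
    "RequestType": "Create",
    "ResourceProperties": {
        "the_bucket": "test-bucket-custom-resource",
        "dirs_to_create": ["dir_1", "dir_2/sub_dir_2", "dir_3"],
    },
}

# The handler loops over dirs_to_create and writes one zero-byte object per
# entry, ending each key with '/' so that the S3 console displays a folder.
for dir_name in event["ResourceProperties"]["dirs_to_create"]:
    print(dir_name + "/")
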
Deploy your CloudFormation template

You can deploy your CloudFormation template using the CloudFormation console or the AWS CLI.

CloudFormation console:

1.    Open the CloudFormation console.

2.    Choose Create stack, and then choose With new resources (standard).

3.    In the Specify template section, choose Upload a template file.

4.    Choose Choose file, select the template that you saved in the Get the CloudFormation template section, and then choose Next.

5.    In the Parameters section, for S3BucketName, enter a name for the S3 bucket that you want to create.

6.    For DirsToCreate, enter a comma-delimited list of the folders and subfolders that you want to create.

Note: For example, you can enter the list dir_1,dir_2/sub_dir_2,dir_3.

7.    Complete the remaining steps in the setup wizard, and then choose Create stack.

AWS CLI:

1.    Name the template that you saved earlier as follows: custom-resource-lambda-s3.template

2.    Open a command line on your operating system, and then navigate to the folder where the template is located.

3.    Run the following command:

aws cloudformation create-stack \
                   --timeout-in-minutes 10 \
                   --disable-rollback \
                   --stack-name testing-custom-resource-s3 \
                   --template-body file://custom-resource-lambda-s3.template \
                   --capabilities CAPABILITY_NAMED_IAM \
                   --parameters \
                   ParameterKey=DirsToCreate,ParameterValue="dir_1\,dir_2/sub_dir_2\,dir_3" \
                   ParameterKey=S3BucketName,ParameterValue="test-bucket-custom-resource"

**Note:** Update the parameters with your own values.
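
To confirm that the folders were created, you can list the objects in the bucket after the stack reaches CREATE_COMPLETE. The following is a minimal sketch with boto3, assuming the example bucket name from the command above; replace it with the value that you passed for S3BucketName:

import boto3

# List the folder placeholder objects that the custom resource created.
# They appear as zero-byte keys that end with '/' (for example, 'dir_1/').
s3 = boto3.client('s3')
response = s3.list_objects_v2(Bucket='test-bucket-custom-resource')
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])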

