
AWS S3 error using the new modular v3 package with Serverless (no errors with the v2 package)


I hope you are having a good day. I am facing an issue with Serverless and S3. When I use version 2 of the AWS SDK ("aws-sdk": "^2.1386.0"), everything works perfectly: I can upload and download images without any problems. However, when I switch to the modular version 3 ("@aws-sdk/client-s3": "^3.554.0"), nothing works and I keep getting an "access denied" error. Nothing has changed in the serverless.yml file. Could you help me understand why it works under the same conditions with v2 but not with v3?

I want to migrate my Lambda to v3 because v2 is reaching end of support, but why does it not work?

serverless.yml (used for both version tests)

org: juan
app: knot
service: knot-be
provider:
  environment:
    APP_ENV: staging
    ...... env variables.....
  name: aws
  runtime: nodejs18.x
  region: us-east-1
  apiGateway:
    binaryMediaTypes:
      - "*/*"
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: "arn:aws:s3:::knot-be/*"
    - Effect: "Allow"
      Action:
        - "ses:SendEmail"
        - "ses:SendRawEmail"
      Resource: "*"
    - Effect: "Allow"
      Action:
        - "rds:*"
      Resource: "*"
  ecr:
    images:
      appimage:
        path: ./
functions:
  api:
    image:
      name: appimage
      command: 
        - dist/lambda.handler
      entryPoint:
        - '/lambda-entrypoint.sh'
    timeout: 30    
    events:
      - http:
          path: /{proxy+}
          method: any
      - http:
          path: /
          method: any
resources:
  Resources:
    Bloodknot3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: knot-be

plugins:
  - serverless-offline

v2 implementation, working with Serverless ("aws-sdk": "^2.1386.0")

import { Injectable, Logger } from '@nestjs/common'
import { S3 } from 'aws-sdk'
import {
  AWS_BUCKET_NAME,
  AWS_ACCESS_KEY_ID,
  AWS_REGION,
  AWS_SECRET_KEY,
  AWS_EXPIRATION_TIME,
} from 'src/commons/constants/aws.constants'

@Injectable()
export class S3Provider {
  private _s3: S3
  private readonly logger: Logger
  private readonly folderStorageName: string

  constructor(folderStorageName: string) {
    this.folderStorageName = folderStorageName
    this.logger = new Logger(S3Provider.name)

    this._s3 = new S3({
      secretAccessKey: AWS_SECRET_KEY,
      accessKeyId: AWS_ACCESS_KEY_ID,
      region: AWS_REGION,
    })
  }

Response with v2: status 200, 201, 204, etc.

v3 implementation, not working with Serverless ("@aws-sdk/client-s3": "^3.554.0")

import { Injectable, Logger } from '@nestjs/common';
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
import {
  AWS_BUCKET_NAME,
  AWS_ACCESS_KEY_ID,
  AWS_REGION,
  AWS_SECRET_KEY,
  AWS_EXPIRATION_TIME,
} from 'src/commons/constants/aws.constants';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

@Injectable()
export class S3Provider {
  private _s3: S3Client;
  private readonly logger: Logger;
  private readonly folderStorageName: string;

  constructor(folderStorageName: string) {
    this.folderStorageName = folderStorageName;
    this.logger = new Logger(S3Provider.name);

    this._s3 = new S3Client({
      credentials: {
        accessKeyId: AWS_ACCESS_KEY_ID,
        secretAccessKey: AWS_SECRET_KEY,
      },
      region: AWS_REGION,
    });
  }

Response with v3: status 400

[Nest] 5538  - 05/07/2024, 1:00:21 AM   ERROR [S3StorageProvider] InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
[Nest] 5538  - 05/07/2024, 1:00:21 AM   ERROR [ExceptionsHandler] Error uploading file at 6b613615-af7b-4408-8566-89e9a5f08872
Error: Error uploading file at 6b613615-af7b-4408-8566-89e9a5f08872
    at S3StorageProvider.uploadFile (/home/seb/Desktop/knotblood/bloodknot-backend/src/commons/providers/file-storage/s3/s3-storage.provider.ts:125:13)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Promise.all (index 0)

constants.ts

export const AWS_BUCKET_NAME = 'knot-be'
export const AWS_SECRET_KEY = process.env.AWS_SECRET_KEY
export const AWS_ACCESS_KEY_ID = process.env.AWS_ACCESS_KEY_ID
export const AWS_REGION = process.env.AWS_REGION
2 Answers

The error message says that your access key isn't accepted. I don't know why that might be, but I can say that your Lambda function shouldn't have such keys at all. Instead, you should add the necessary permissions to the Lambda execution role used by your function and simply create the S3 client in your code without specifying credentials. That will cause S3 to be accessed with the temporary credentials of the IAM role used as the Lambda execution role.

If I misunderstood and this code isn't running in Lambda but in ECS or on an EC2 instance, the same principle still applies: the ECS task role, or the IAM role attached to the EC2 instance profile, would be granted the necessary permissions, and the code running on that platform would not specify credentials for accessing S3 or other AWS services.

The AWS SDK will automatically use the temporary credentials available from the platform when keys aren't explicitly defined, whether the platform is a Lambda function, ECS task, or EC2 instance.

EXPERT
answered 2 years ago
  • Can you please explain how this works? I'm new to this. Should I delete these variables:

    AWS_SECRET_KEY = process.env.AWS_SECRET_KEY
    AWS_ACCESS_KEY_ID = process.env.AWS_ACCESS_KEY_ID

    and will S3 v3 automatically connect?


The lines you quoted only set the variables in your code; those lines don't matter by themselves. What you should remove are the parameters you're passing to S3Client that reference those variables. Passing them causes the static key pair that AWS_SECRET_KEY and AWS_ACCESS_KEY_ID effectively represent to be used to authenticate to AWS when accessing the S3 bucket. When you don't specify any credentials, the AWS SDK you're using will transparently and automatically obtain temporary credentials from the platform you're running on, such as Lambda.

However, before you delete the credentials from your code, you will need to grant the permissions to S3 to the IAM role that your Lambda function is using as its execution role. You can see the role identifier in the configuration of your Lambda function. You can either add the permissions to the policies attached to the IAM role, or you can grant the permissions in the bucket policy of your S3 bucket.

If you aren't familiar with the terms I'm using, perhaps there's someone else you're working with who has set up the current Lambda execution role and the S3 bucket who would be able to configure what I'm explaining?

EXPERT
answered 2 years ago
