Serverless deploy with s3 error: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.


I have an application where I need to use the S3 service. When I test credentials created manually in my AWS account I can access these services, but when I try to use it from Serverless, I get this issue: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.

**serverless.yaml**

org: juan
app: knot
service: knot-be
provider:
  environment:
    APP_ENV: staging
    ...... env variables.....
  name: aws
  runtime: nodejs18.x
  region: us-east-1
  apiGateway:
    binaryMediaTypes:
      - "*/*"
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: "arn:aws:s3:::knot-be/*"
    - Effect: "Allow"
      Action:
        - "ses:SendEmail"
        - "ses:SendRawEmail"
      Resource: "*"
    - Effect: "Allow"
      Action:
        - "rds:*"
      Resource: "*"
  ecr:
    images:
      appimage:
        path: ./
functions:
  api:
    image:
      name: appimage
      command: 
        - dist/lambda.handler
      entryPoint:
        - '/lambda-entrypoint.sh'
    timeout: 30    
    events:
      - http:
          path: /{proxy+}
          method: any
      - http:
          path: /
          method: any
resources:
  Resources:
    Bloodknot3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: knot-be

plugins:
  - serverless-offline

**this is my s3.config.ts**

// Imports required by this module (AWS SDK v3)
import {
  S3Client,
  GetObjectCommand,
  PutObjectCommand,
  DeleteObjectCommand,
  ListObjectsV2Command,
  BucketLocationConstraint,
} from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const s3Config = {
  bucketName: process.env.AWS_S3_BUCKET_NAME || 'storage-dev',
  region: (process.env.AWS_REGION || 'us-east-2') as BucketLocationConstraint,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID || '',
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY || '',
  },
}

const s3Client = new S3Client(s3Config)


export {
  s3Config,
  GetObjectCommand,
  PutObjectCommand,
  DeleteObjectCommand,
  ListObjectsV2Command,
  S3Client,
  getSignedUrl,
}

export default s3Client

and I created an endpoint to check whether the AWS variables are available, and of course they are present in the response from the deployed server:

{
    "bucketName": "knot-be",
    "region": "us-east-1",
    "credentials": {
        "accessKeyId": "ASIA.....",
        "secretAccessKey": "qG....."
    }
}
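
Worth noting: the access key in that response starts with `ASIA`, which is the prefix AWS uses for temporary (STS) credentials; such keys are only valid together with a session token. A minimal sketch (mine, not from the original code) of a credentials builder that forwards Lambda's `AWS_SESSION_TOKEN` when it is present:

```javascript
// Access keys starting with "ASIA" are temporary STS credentials; using
// them without their session token yields InvalidAccessKeyId. Access keys
// starting with "AKIA" are long-lived and need no token.
function buildCredentials(env) {
  const credentials = {
    accessKeyId: env.AWS_ACCESS_KEY_ID || '',
    secretAccessKey: env.AWS_SECRET_ACCESS_KEY || '',
  };
  if (env.AWS_SESSION_TOKEN) {
    // Forward the session token that the Lambda runtime injects.
    credentials.sessionToken = env.AWS_SESSION_TOKEN;
  }
  return credentials;
}
```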

But I'm still having the same issue:

[Nest] 5538  - 05/07/2024, 1:00:21 AM   ERROR [S3StorageProvider] InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
[Nest] 5538  - 05/07/2024, 1:00:21 AM   ERROR [ExceptionsHandler] Error uploading file at 6b613615-af7b-4408-8566-89e9a5f08872
Error: Error uploading file at 6b613615-af7b-4408-8566-89e9a5f08872
    at S3StorageProvider.uploadFile (/home/seb/Desktop/knotblood/bloodknot-backend/src/commons/providers/file-storage/s3/s3-storage.provider.ts:125:13)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Promise.all (index 0)

Why does it work locally when I use my own AWS variables, but fail when I use the Serverless-provided variables, even if I use them locally too? Is there something wrong with my serverless file, or is there some additional configuration I have to do in AWS to use this with Serverless?

4 Answers

Hello,

**Check Environment Variables:** Make sure that you're setting the AWS access key ID and secret access key correctly as environment variables in your Serverless configuration.
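
One way to make this check explicit is a small startup guard. A sketch (the variable names are assumed from the question's s3.config.ts) that lists required variables that are unset or blank:

```javascript
// Names the S3 client in the question depends on (assumed from s3.config.ts).
const REQUIRED_VARS = [
  'AWS_S3_BUCKET_NAME',
  'AWS_REGION',
  'AWS_ACCESS_KEY_ID',
  'AWS_SECRET_ACCESS_KEY',
];

// Return the names of required variables that are missing or blank.
function missingEnvVars(env, required) {
  return required.filter((name) => !env[name] || env[name].trim() === '');
}

const missing = missingEnvVars(process.env, REQUIRED_VARS);
if (missing.length > 0) {
  console.warn(`Missing environment variables: ${missing.join(', ')}`);
}
```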

Verify IAM Permissions: Ensure that the IAM role associated with your Lambda function has the necessary permissions to access S3. Your IAM role should have a policy attached that grants permissions for S3 operations.

AWS Region: Double-check that the AWS region specified in your Lambda function's configuration matches the region where your S3 bucket is located.

serverless.yaml

service: knot-be

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1
  environment:
    AWS_S3_BUCKET_NAME: knot-be
    # Note: AWS_REGION, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY are
    # reserved environment variable names in Lambda and cannot be set here;
    # the Lambda runtime injects them automatically from the function's
    # execution role.
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: "*"

functions:
  api:
    # Your function configuration

resources:
  Resources:
    Bloodknot3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: knot-be

With this configuration, the Lambda runtime exposes the execution role's credentials to your function's environment, allowing it to access S3, and the IAM role associated with your Lambda function will have the necessary permissions to perform S3 operations.

Thank you

answered 23 days ago

I mean, I want to use the S3 setup that Serverless builds and not use my own credentials.

answered 23 days ago
  • Ok, Access Secret in Code: In your serverless function code (e.g., Node.js), use the AWS SDK to retrieve the secret value and use it to configure your S3 client:

    JavaScript:

        const AWS = require('aws-sdk');

        async function getSecret() {
          const secretsManager = new AWS.SecretsManager({ region: process.env.AWS_REGION });
          const getSecretValueInput = {
            SecretId: 'your-secret-name-12345', // Replace with your secret's name
          };

          try {
            const data = await secretsManager.getSecretValue(getSecretValueInput).promise();
            return JSON.parse(data.SecretString);
          } catch (error) {
            console.error('Error retrieving secret:', error);
            throw error;
          }
        }

        async function useS3() {
          const secrets = await getSecret();
          const s3Client = new AWS.S3({
            accessKeyId: secrets.accessKeyId,
            secretAccessKey: secrets.secretAccessKey,
            region: process.env.AWS_REGION,
          });
          // Use the S3 client to interact with S3 buckets
        }

    => Create IAM Role for Lambda: Create an IAM role specifically for your Lambda function.
    => Attach S3 Policy: Attach an IAM policy to the role that grants the necessary permissions for S3 access (e.g., AmazonS3FullAccess).
    => Update serverless.yaml: In your serverless.yaml file, reference the IAM role you created:

    YAML:

        iamRole: !Sub "arn:aws:iam::${aws:accountId}:role/your-lambda-role-name" # Replace with your role's ARN

    Ensure your IAM role or Lambda function has the minimum necessary permissions for S3 access.


It seems like the issue might be related to how the Serverless Framework is handling AWS credentials. Here are a few steps you can take to troubleshoot and resolve it:

**Verify AWS Credentials:** Double-check that the AWS Access Key ID and Secret Access Key you are using in your serverless configuration are correct. You can verify this in the AWS Management Console under IAM (Identity and Access Management) for the user whose credentials you are using.

**Ensure Environment Variables:** Confirm that the environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) are correctly set in your serverless environment. You mentioned that you have verified these variables using an endpoint, but it's always good to double-check.

**IAM Role Permissions:** Ensure that the IAM role associated with your Lambda function has the necessary permissions to access S3. In your serverless.yaml file, you've defined IAM role statements granting S3 permissions, but it's worth checking that the permissions are applied correctly.

**Check AWS Profile:** If you're using AWS profiles for authentication locally (defined in ~/.aws/credentials), ensure that the correct profile is being used both locally and in the serverless environment.

**Use AWS SDK for Authentication:** Instead of manually providing AWS credentials in your s3.config.ts file, consider letting the AWS SDK for Node.js automatically load credentials from the default locations (environment variables, shared credentials file, etc.). You can achieve this by removing the credentials section from your s3Config object and initializing the S3 client without explicitly providing credentials.

**CloudFormation Stack Update:** If you've made changes to your IAM role permissions or any other AWS-related configuration in your serverless.yaml file, deploy the changes again and ensure that the CloudFormation stack updates successfully.

After performing these checks and adjustments, attempt to redeploy your serverless application and see if the issue persists. If it does, check the error logs and AWS CloudTrail logs for more detailed information on what might be causing the credential validation error.

Here's an example of how you can modify your s3.config.ts file to use the AWS SDK for authentication:

    import { S3Client } from '@aws-sdk/client-s3';

    const s3Config = {
      bucketName: process.env.AWS_S3_BUCKET_NAME || 'bloodknot-storage-dev',
      region: process.env.AWS_REGION || 'us-east-2',
    };

    const s3Client = new S3Client({ region: s3Config.region });

    export { s3Config, s3Client };

answered 23 days ago

Hi Sebastian,

This explanation should give you an idea of the problem and a solution. It may be an issue with how you're storing your AWS credentials in your serverless.yaml file.

Use AWS Secrets Manager:

  1. Store your access key and secret access key securely in AWS Secrets Manager.
  2. In your serverless.yaml, reference the secret using the arn of the secret instead of environment variables.

Here's a basic example:

YAML

    provider:
      ...
      iamRoleStatements:
        - ...  # Existing IAM role statements
        - Effect: Allow
          Action:
            - secretsmanager:GetSecretValue
          Resource: !Sub "arn:aws:secretsmanager:${aws:region}:${aws:accountId}:secret:your-secret-name-12345"  # Replace with your secret's ARN

Fetch Credentials at Runtime (if applicable):

If your Lambda function can securely fetch credentials at runtime using IAM roles or a tailored solution, avoid storing credentials altogether.
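
If you do store credentials in Secrets Manager, the parsing step is worth validating. A hedged sketch, assuming the secret holds a JSON string with `accessKeyId` and `secretAccessKey` fields (the helper name is illustrative, not from the answer above):

```javascript
// Hypothetical helper: validate and normalize the SecretString returned by
// Secrets Manager's GetSecretValue into a credentials object.
function parseSecretCredentials(secretString) {
  const parsed = JSON.parse(secretString);
  if (!parsed.accessKeyId || !parsed.secretAccessKey) {
    // Fail loudly rather than sending empty credentials to S3, which
    // would surface later as InvalidAccessKeyId.
    throw new Error('Secret must contain accessKeyId and secretAccessKey');
  }
  return {
    accessKeyId: parsed.accessKeyId,
    secretAccessKey: parsed.secretAccessKey,
  };
}
```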

Here's the AWS Secrets Manager documentation to guide you: https://docs.aws.amazon.com/secretsmanager/

answered 23 days ago
