
S3 lambda upload function working locally , but not working when deployed


I have a Lambda function that takes a file as input and uploads it to AWS S3. It works locally, but not when deployed: images uploaded through the deployed function are not viewable, documents and other files open as corrupted, and uploading anything above 1 MB returns either an internal server error or a "request is too big" error.

This is the function:

```typescript
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import * as crypto from "crypto";
import { middyfy } from "@libs/lambda";
import { _200, _400, _500 } from "@libs/Response";
import * as dotenv from "dotenv";
import { Handler } from "aws-lambda";
const multipart = require("aws-lambda-multipart-parser");

dotenv.config();

const region = process.env.AWS_REGION || "it is hidden not empty ";

// Initialize AWS S3 client using environment variables for credentials
const s3Client = new S3Client({ region: region });

// Function to generate a random filename
const generateRandomFileName = () => {
  return crypto.randomBytes(8).toString("hex");
};

// Function to upload file data to S3 and return the URL
const uploadFileAndGetUrl = async (fileContent, contentType) => {
  const randomFileName = generateRandomFileName();
  const Key = `uploads/${randomFileName}`;

  try {
    const upload = new Upload({
      client: s3Client,
      params: {
        Bucket: process.env.AWSBucket,
        Key,
        Body: fileContent,
        ContentType: contentType,
        // ACL: "public-read",
      },
    });

    const result = await upload.done();
    const fileUrl = `https://${process.env.AWSBucket}.s3.${region}.amazonaws.com/${result.Key}`;
    return fileUrl;
  } catch (error) {
    console.error("Error uploading file to S3:", error);
    throw new Error("File upload failed");
  }
};

export const uploadFile: Handler = async (event) => {
  try {
    const result = multipart.parse(event, true);

    if (!result.file.contentType || !result.file.content) {
      return _400("Invalid request format.");
    }

    const contentType = result.file.contentType;
    const fileContent = result.file.content;

    const fileUrl = await uploadFileAndGetUrl(fileContent, contentType);

    return _200({ url: fileUrl });
  } catch (error) {
    console.error("Error handling file upload:", error);
    return _500({ message: "Internal server error.", ...error });
  }
};

export const main = middyfy(uploadFile);
```

The function definition:

```typescript
import { handlerPath } from "@libs/handler-resolver";

export default {
  handler: `${handlerPath(__dirname)}/handler.main`,
  timeout: 30,
  memorySize: 512,
  events: [
    {
      http: {
        method: "post",
        path: "/file/upload",
        cors: true,
      },
    },
  ],
};
```

serverless.ts:

```typescript
import type { AWS } from "@serverless/typescript";
import * as functions from "@functions/index";
import DynamoDBResources from "./serverless/DynamodbResources";

const DynamoTableNames = () => {
  const tableNames: { [key: string]: { Ref: string } } = {};
  Object.keys(DynamoDBResources).map((tableName) => {
    tableNames[tableName] = { Ref: tableName };
  });
  return tableNames;
};

const serverlessConfiguration: AWS = {
  service: "procurpal",
  useDotenv: true,
  frameworkVersion: "3",
  plugins: ["serverless-esbuild", "serverless-webpack", "serverless-offline"],
  provider: {
    name: "aws",
    runtime: "nodejs20.x",
    profile: "it is hidden not empty ",
    region: "ap-south-1",
    iamManagedPolicies: [
      "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess",
      "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    ],
    apiGateway: {
      minimumCompressionSize: 1024,
      shouldStartNameWithService: true,
    },
    environment: {
      AWS_NODEJS_CONNECTION_REUSE_ENABLED: "1",
      NODE_OPTIONS: "--enable-source-maps --stack-trace-limit=1000",
      region: "${self:provider.region}",
      ...DynamoTableNames(),
      AWSBucket: "${self:custom.bucketName}",
      BaseURL: "${self:custom.BaseURL}",
    },
  },
  // import the function via paths
  functions: { ...functions },
  package: { individually: true },
  custom: {
    esbuild: {
      bundle: true,
      minify: false,
      sourcemap: true,
      exclude: ["aws-sdk"],
      target: "node20",
      define: { "require.resolve": undefined },
      platform: "node",
      concurrency: 10,
    },
    bucketName: "it is hidden not empty ",
    webpack: {
      webpackConfig: "./webpack.config.js",
      includeModules: true,
    },
    BaseURL: "it is hidden not empty ",
    WSSBaseURL: "it is hidden not empty ",
  },
  resources: {
    Resources: {
      ...DynamoDBResources,
    },
  },
};

module.exports = serverlessConfiguration;
```

1 Answer

Greeting

Hi Arin,

Thanks for sharing the details of your Lambda function issue! It sounds like you've put significant effort into building a robust solution for uploading files to S3, and it's frustrating when things work locally but break when deployed. Let's dive in and get this sorted. 😊


Clarifying the Issue

From your description, the deployed Lambda function is experiencing the following problems:

  • Files uploaded are corrupted or not viewable, indicating potential issues with how file content is processed and sent to S3.
  • Files larger than 1 MB throw errors like "internal server error" or "request is too big," which may point to payload or API Gateway limitations.

These are common issues when working with Lambda functions and S3 uploads, especially for scenarios involving large files or incorrect handling of binary data. We'll break this down to pinpoint the root cause and implement fixes to ensure your function behaves reliably after deployment.


Key Terms

  • AWS Lambda: A serverless compute service that runs code without provisioning or managing servers.
  • Amazon S3: A scalable storage service designed for high availability and performance.
  • Multipart Upload: A method in S3 to upload large files in smaller parts, enabling efficient handling of large payloads.
  • API Gateway Payload Limit: REST API requests are capped at 10 MB, and a Lambda proxy integration is further limited by Lambda's 6 MB synchronous invocation payload; base64-encoding binary bodies inflates them by roughly a third, so the practical ceiling is lower still.
  • Binary Media Types: Media types configured in API Gateway to allow binary content (e.g., images, PDFs).

The Solution (Our Recipe)

Steps at a Glance:

  1. Configure API Gateway to handle binary media types.
  2. Adjust the Lambda function to process binary file content correctly.
  3. Handle large files by switching to presigned URLs or multipart uploads.
  4. Test the deployed function with different file types and sizes.

Step-by-Step Guide:

  1. Configure API Gateway to Handle Binary Media Types
    • Add the content types your clients actually send to API Gateway's binary media types. For a multipart upload the request's Content-Type is multipart/form-data, so that (or the catch-all */*) is what needs to be listed, not just image/jpeg or application/pdf.
    • In the API Gateway console this lives under Settings → "Binary Media Types." Since you deploy with the Serverless Framework, you can also declare it in serverless.ts, as sketched below.
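    • A minimal sketch of the provider.apiGateway block with that setting added (the media type shown is an example; adjust it, or use "*/*", to match what your clients send):
      // serverless.ts – merge into your existing provider.apiGateway block
      apiGateway: {
        minimumCompressionSize: 1024,
        shouldStartNameWithService: true,
        // Treat matching request content types as binary so the multipart body
        // reaches Lambda base64-encoded instead of being re-encoded as text
        binaryMediaTypes: ["multipart/form-data"],
      },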

  2. Adjust the Lambda Function to Process Binary File Content Correctly
    • Use the Buffer module to ensure file content remains intact during processing.
    • Modify your function to handle binary data explicitly:
      const result = multipart.parse(event, true);
      const fileContent = Buffer.from(result.file.content, 'binary');
      const contentType = result.file.contentType;
      const fileUrl = await uploadFileAndGetUrl(fileContent, contentType);
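    • If uploads still come out corrupted after that, confirm that API Gateway really delivered the body as base64. A small optional guard at the top of the handler (a sketch assuming the standard API Gateway proxy event shape):
      // isBase64Encoded should be true for binary uploads once binary media types
      // are configured; if it is false, the bytes were re-encoded as text before
      // the multipart parser ever saw them.
      if (!event.isBase64Encoded) {
        console.warn("Body arrived as plain text – check API Gateway binary media types");
      }
      const result = multipart.parse(event, true);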

  3. Handle Large Files by Using Presigned URLs or Multipart Uploads
    • For files larger than the payload limit, have the client upload directly to S3 using a presigned URL. Note that an upload URL is signed with PutObjectCommand (not GetObjectCommand) and requires getSignedUrl from @aws-sdk/s3-request-presigner:
      const { PutObjectCommand, S3Client } = require("@aws-sdk/client-s3");
      const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
      
      const s3 = new S3Client({ region: process.env.AWS_REGION });
      
      const params = {
        Bucket: process.env.AWSBucket,
        Key: `uploads/${filename}`,
        ContentType: contentType,
      };
      const command = new PutObjectCommand(params);
      const signedUrl = await getSignedUrl(s3, command, { expiresIn: 3600 });
      
      return { url: signedUrl };
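    • The client then PUTs the raw file bytes straight to that URL, so they never pass through API Gateway or Lambda. A rough sketch, where uploadEndpoint is a placeholder for the route that returns the signed URL and file is the selected File/Blob:
      // 1) ask your API for a presigned PUT URL, 2) PUT the raw bytes to it
      const { url } = await (await fetch(uploadEndpoint, { method: "POST" })).json();
      await fetch(url, {
        method: "PUT",
        headers: { "Content-Type": file.type }, // must match the ContentType used when signing
        body: file,
      });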

  4. Test the Deployed Function with Different File Types and Sizes
    • Deploy your updated function and test it with images, PDFs, and larger files. Monitor CloudWatch logs for any issues.
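    • For a quick end-to-end check outside the browser, a small script can post a file to the deployed endpoint and compare what S3 stored with the original (Node 18+ ships fetch, FormData, and Blob). This is a sketch: the endpoint URL and file path are placeholders, and the final GET assumes the uploaded object is publicly readable.
      // test-upload.mjs – run with: node test-upload.mjs
      import { readFileSync } from "fs";

      const ENDPOINT = "https://<api-id>.execute-api.ap-south-1.amazonaws.com/dev/file/upload"; // placeholder
      const original = readFileSync("./sample.jpg"); // placeholder local file

      // The field name must be "file" because the handler reads result.file
      const form = new FormData();
      form.append("file", new Blob([original], { type: "image/jpeg" }), "sample.jpg");

      const res = await fetch(ENDPOINT, { method: "POST", body: form });
      const { url } = await res.json();

      // A byte-count mismatch means the upload was corrupted somewhere along the way
      const stored = Buffer.from(await (await fetch(url)).arrayBuffer());
      console.log("sent:", original.length, "stored:", stored.length, "match:", original.length === stored.length);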

Closing Thoughts

These steps should address the file corruption and size limitation issues you're facing. The AWS documentation on API Gateway binary media types and on S3 presigned URLs is also worth reading to deepen your understanding.

Feel free to reach out if you have any more questions, Arin. You're doing great work! 🚀


Farewell

Best of luck with your Lambda function and S3 uploads. Let me know how it goes! 😊


Cheers,

Aaron 😊

answered 10 months ago
