Today, the solution that you're using is the best way to do what you need. S3 is not the only storage solution you could use, but it is the one I'd recommend in any case.
If you are chatting with your local AWS Solutions Architect, mention to them that larger payloads for Step Functions would be handy - we are always looking for feedback about our services.
That said: what would happen if we raised the limit to (say) 512 KB? Once you start to exceed that, you might want the limit raised again. At which point do we say "no" such that you need to rearchitect? There will always be a hard limit somewhere, and limits are generally set to protect you, the service, and other customers while keeping the experience as performant and cost-effective as possible.
As above - you're doing what we recommend, so continue to do that.
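For anyone landing here, the recommended approach is essentially the claim-check pattern: inline small payloads, park large ones in S3, and pass only a pointer between states. A minimal sketch of the task-side logic, assuming hypothetical bucket/key names and with the S3 calls injected as callables so it runs without AWS credentials (in a real Lambda they would wrap `s3.put_object` / `s3.get_object`):

```python
import json

# Step Functions caps inter-state payloads at 256 KB (262,144 bytes).
PAYLOAD_LIMIT_BYTES = 256 * 1024

def to_state_output(payload: dict, store, bucket: str, key: str) -> dict:
    """Claim-check pattern: inline small payloads, offload large ones.

    `store` is any callable(bucket, key, body); in a real Lambda it
    would wrap boto3's s3.put_object. It is injected here so the
    logic is testable without AWS credentials.
    """
    body = json.dumps(payload).encode("utf-8")
    if len(body) < PAYLOAD_LIMIT_BYTES:
        return {"inline": payload}
    store(bucket, key, body)
    # Downstream states receive only the pointer, never the large body.
    return {"s3": {"bucket": bucket, "key": key}}

def from_state_input(state: dict, fetch) -> dict:
    """Resolve a claim-check reference back into the full payload."""
    if "inline" in state:
        return state["inline"]
    ref = state["s3"]
    return json.loads(fetch(ref["bucket"], ref["key"]))
```

Every task in the workflow then calls `from_state_input` on entry and `to_state_output` on exit, so the 256 KB limit only ever applies to the pointer, not the data.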
The Step Functions manual recommends using Amazon S3 when payloads exceed 256 KB, but it doesn't provide any built-in mechanism to do so seamlessly. Ideally, all integrations -- such as AWS service calls that can produce large outputs -- should automatically store results in S3 and return the S3 URL in the JSON response. For example, the CloudFormation GetTemplate API can generate large payloads, so it would make sense for Step Functions to handle this seamlessly by saving the output to S3. Why is it that only Distributed Map supports reading from and writing to S3? And why doesn't the Step Functions workflow itself offer such wrappers for all inter-step payload handling?
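Today the wrapping the question asks for has to be done in user code, typically a Lambda task. A sketch of what such a wrapper around GetTemplate might look like, with boto3-style `cfn` and `s3` clients injected so the shape can be exercised with fakes (the bucket, key, and return-field names here are illustrative, not an existing Step Functions feature):

```python
STATE_IO_LIMIT = 256 * 1024  # Step Functions inter-state payload cap

def get_template_claim_checked(cfn, s3, stack_name, bucket, key):
    """Fetch a CloudFormation template; if it is too large for state
    I/O, park it in S3 and return only its location.

    `cfn` and `s3` are boto3-style clients (cloudformation and s3),
    passed in so the function is testable without AWS credentials.
    """
    template = cfn.get_template(StackName=stack_name)["TemplateBody"]
    if len(template.encode("utf-8")) < STATE_IO_LIMIT:
        # Small enough: return the body inline in the state payload.
        return {"templateBody": template}
    s3.put_object(Bucket=bucket, Key=key, Body=template.encode("utf-8"))
    # Only the S3 URL travels between states.
    return {"templateUrl": f"s3://{bucket}/{key}"}
```

A native version of this per integration is exactly what the question is asking Step Functions to provide.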

Struggling with a similar problem here. Also, this seems to be along the same lines.
The HTTPS Task integration is a great feature in StepFunctions, but in 2025, when API designers give little thought to bandwidth or storage, 256 KB may be too harsh a limit.
As for where to draw the line, it's true that no matter how large the limit, there will always be someone who needs more.
However, specifically for cases like mine where the HTTP payload is not processed in StepFunctions, I would suggest extending the HTTP Task integration so that the HTTP payload (the `body` section of the request/response) is stored in specified S3 object(s), while the inter-state payload contains only the headers and metadata. It makes sense that there would be a limit on the amount of data handled this way as well, but it would be decoupled from the general state I/O limit, so it could be set significantly higher.
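The suggested split could look something like this today, implemented in a Lambda task rather than natively. The `store` callable stands in for `s3.put_object`, and the field names are hypothetical, not part of any existing Step Functions integration:

```python
def split_http_response(response, store, bucket, key):
    """Keep headers and metadata in the state payload; divert the
    (possibly large) body to an S3 object.

    `response` is a dict with "status", "headers", and "body" keys;
    `store` is any callable(bucket, key, body), standing in for
    boto3's s3.put_object so the function runs without AWS access.
    """
    store(bucket, key, response["body"])
    return {
        "statusCode": response["status"],
        "headers": response["headers"],
        # Downstream states get a pointer, never the raw body.
        "bodyLocation": {"bucket": bucket, "key": key},
    }
```

Having the HTTP Task do this automatically, with its own (higher) limit on the diverted body, is the feature request being made here.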