Today, the solution you're using is the best way to do what you need. S3 is not the only storage option you could use, but it is the one I'd recommend in any case.
If you are chatting with your local AWS Solutions Architect, mention to them that larger payloads for Step Functions would be handy - we are always looking for feedback about our services.
That said: what would happen if we raised the limit to (say) 512 KB? When you start to exceed that, you might want the limit raised again. At which point do we say "no" such that you need to rearchitect? There will always be a hard limit somewhere, and limits are generally set to protect you, the service, and other customers while keeping the experience performant and as cost-effective as possible.
As above - you're doing what we recommend so continue to do that.
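For reference, a minimal sketch of that pattern (sometimes called the "claim check" pattern), assuming a Lambda task state and a hypothetical bucket name: the function writes the large payload to S3 and returns only a small reference, which is what travels between states.

```python
import json
import uuid

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name for illustration only.
PAYLOAD_BUCKET = "my-stepfunctions-payloads"


def lambda_handler(event, context):
    """Store a large payload in S3 and return only a reference.

    Downstream states receive the bucket/key instead of the payload
    itself, keeping the state I/O well under the 256 KB limit.
    """
    large_payload = event["payload"]  # may be far larger than 256 KB
    key = f"payloads/{context.aws_request_id}/{uuid.uuid4()}.json"

    s3.put_object(
        Bucket=PAYLOAD_BUCKET,
        Key=key,
        Body=json.dumps(large_payload).encode("utf-8"),
    )

    # Only this small reference travels through the state machine.
    return {"payloadRef": {"bucket": PAYLOAD_BUCKET, "key": key}}
```

A later state (or another Lambda) then reads the object back with `s3.get_object` using the passed bucket/key when it actually needs the data.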
Struggling with a similar problem here. Also this seems to be along the same lines.
The HTTPS Task integration is a great feature in StepFunctions, but it seems that in 2025, when API designers don't think much about bandwidth or storage, 256 KB may be too harsh a limit.
As for where to draw the line, it's true that no matter how large the limit, there will always be someone who needs more.
However, specifically for cases like mine where the HTTP payload is not processed in StepFunctions, I would suggest extending the HTTP Task integration so that the HTTP payload (the `body` section of the request/response) is stored in specified S3 object(s), while the inter-state payload contains only the headers and metadata. It makes sense that there would be a limit on the amount of data processed this way as well, but it would be decoupled from the general state I/O limit, so it could be set significantly higher.