Great question! It seems counterintuitive at first, but this is a common observation.
Key Point: S3 Gateway Endpoints prioritize security and cost efficiency over raw speed, especially for large file transfers, while Internet Gateway-based access sometimes benefits from AWS internal optimization layers like Global Accelerator or edge caching.
Why the VPC Gateway Endpoint Might Be Slower:
- No Global Accelerator path: When using public access via an Internet Gateway, S3 traffic might benefit from AWS's edge-optimized network paths (like Global Accelerator), resulting in faster transit. Gateway endpoints use internal regional networking, which can sometimes have less aggressive optimizations or route through internal NAT-like hops.
- Lack of TCP window scaling optimization: Gateway endpoints may not leverage the same level of TCP acceleration or throughput tuning that AWS applies to edge traffic.
- Fargate task ENI placement: Your Fargate task's ENI is bound to a specific subnet. If the subnet has additional hops or less optimal latency to the S3 regional endpoint (even inside AWS), that can introduce slight delays.
- No caching benefit: Public-path traffic may be hitting previously cached data or using optimized network paths; VPC Gateway Endpoint traffic goes directly to S3's storage backend, with no caching layer.
S3 Transfer Acceleration ≠ Gateway Endpoints: Many confuse the performance benefits of S3 Transfer Acceleration (which uses edge locations and CloudFront) with VPC gateway endpoints. They serve different purposes: acceleration is for speed; endpoints are for security and cost.
Recommendations:
- Run traceroute or mtr from a Fargate container to see hop differences.
- Try S3 Transfer Acceleration (for public access) if speed is critical.
- If private access is a must, consider testing S3 Interface Endpoints (powered by PrivateLink) instead of gateway endpoints; those sometimes offer better throughput in isolated setups.
- Test across different subnets/AZs to rule out internal latency quirks.
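If you do want to try the Transfer Acceleration route, boto3 can be pointed at the accelerated endpoint via a client config option. A minimal sketch, assuming acceleration has already been enabled on the bucket (it is off by default and billed separately); the bucket and key names are placeholders:

```python
# Sketch: using the S3 Transfer Acceleration endpoint from boto3.
# Assumes Transfer Acceleration is enabled on the bucket; the bucket
# and object names below are placeholders, not real resources.
s3_options = {"use_accelerate_endpoint": True}

if __name__ == "__main__":
    import boto3
    from botocore.config import Config

    # All requests from this client use the bucket's accelerate endpoint.
    s3 = boto3.client("s3", config=Config(s3=s3_options))
    s3.download_file("my-bucket", "big-file.bin", "big-file.bin")
```

Note that acceleration mainly helps cross-region or long-haul transfers; for same-region access (as in the original question) the benefit may be small.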
There are several potential reasons why your S3 VPC gateway endpoint might be performing slower than the internet gateway:
- Network path optimization: When using an internet gateway, your traffic might be taking advantage of AWS's optimized network paths that aren't used with the gateway endpoint. The internet gateway might be routing your traffic through AWS's global network infrastructure in a more efficient way.
- Configuration issues: The bucket policy you added to ensure traffic goes through the VPC endpoint might introduce additional processing overhead. Each request needs to be validated against this policy, which could add latency.
- Resource allocation: The resources allocated to handle gateway endpoint traffic might differ from those handling internet gateway traffic, potentially leading to different performance characteristics.
- Transfer Acceleration: If you were previously benefiting from S3 Transfer Acceleration (which uses CloudFront's globally distributed edge locations), this wouldn't be used with the VPC endpoint.
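For reference, a bucket policy of the kind described above (forcing traffic through a specific VPC endpoint) typically uses the `aws:SourceVpce` condition key. A sketch with placeholder bucket name and endpoint ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnlessViaVpcEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-0123456789abcdef0"
        }
      }
    }
  ]
}
```

A simple condition like this is evaluated per request but shouldn't add measurable latency on its own.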
To improve your download speeds, you could try:
- Optimizing your AWS CLI configuration parameters for S3 operations
- Using third-party tools like rclone to compare performance
- Enabling S3 Transfer Acceleration on your bucket (note that this has additional costs)
- Implementing parallel downloads using multiple threads if you're downloading multiple files
- Increasing the CPU/memory allocation for your Fargate task, since network performance scales with task size
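The parallel-download suggestion can be sketched with a thread pool. This is a minimal illustration, not a tuned implementation; the bucket and key names are placeholders, and the helper works with any boto3-style client exposing `download_file(bucket, key, filename)`:

```python
# Sketch: downloading several S3 objects in parallel with a thread pool.
# Bucket and key names are placeholders.
from concurrent.futures import ThreadPoolExecutor


def download_all(s3_client, bucket, keys, workers=8):
    """Fetch every key concurrently; returns the keys in input order."""
    def fetch(key):
        # Save each object under its basename in the current directory.
        s3_client.download_file(bucket, key, key.split("/")[-1])
        return key

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, keys))


if __name__ == "__main__":
    import boto3

    s3 = boto3.client("s3")
    print(download_all(s3, "my-bucket", ["data/a.bin", "data/b.bin"]))
```

For a single large file, boto3's `TransferConfig` (multipart threshold and `max_concurrency`) achieves a similar effect inside one `download_file` call.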
If you need to use the VPC endpoint for security or compliance reasons, you might need to accept the performance trade-off or explore other optimization techniques.
Sources
S3 download file is slowly via DC | AWS re:Post
Accessing S3 Objects takes time | AWS re:Post
Downloading PDF from s3 bucket takes too much time. | AWS re:Post
“Internet Gateway vs Gateway Endpoint”: why the IGW might feel faster:
It’s true that Gateway Endpoints keep traffic entirely within AWS and are more secure and cost-efficient. However, latency and throughput aren’t always guaranteed to be better.
When using the Internet Gateway, AWS may leverage Global Accelerator-style routing, especially for public S3 access; this can unintentionally provide better-optimized network paths, even within the same region.
**Your assumption is correct:** VPC Gateway Endpoints are internal, but they can still introduce regional network hops or resource bottlenecks depending on internal routing or service limits.
**Bucket Policy Overhead:** Agreed. The policy shouldn't add noticeable latency unless it's complex or has conditional logic. If you've tested with and without and seen no major change, you can rule this out.
**Resource Allocation Differences:** This is a valid hypothesis. Internally, endpoint-backed routing may hit different infrastructure paths, especially under burst conditions or heavy regional traffic.
While it's not "unreliable," there can be performance variance, particularly when:
- using Fargate, where ENI placement affects performance;
- transferring large files;
- experiencing shared endpoint contention.

**Transfer Acceleration:** Confirmed; if you're not using Transfer Acceleration and both ECS and S3 are in the same region, that shouldn't factor in here.
Finally: If you're optimizing for performance over strict network isolation, the public S3 path (via IGW) can sometimes perform better, even inside AWS. For secure/private access, VPC Gateway Endpoints are still best practice.
If this latency delta is a concern, consider running larger sample tests or looking at Interface Endpoints (S3 PrivateLink) as a potential alternative for finer-grained control and visibility.
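For completeness, an S3 Interface Endpoint can be created with boto3's EC2 API. A sketch under assumed, placeholder VPC, subnet, and security-group IDs (the region is just an example):

```python
# Sketch: creating an S3 Interface Endpoint (PrivateLink) via boto3.
# All resource IDs below are placeholders, not real resources.
endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0123456789abcdef0",              # placeholder
    "ServiceName": "com.amazonaws.eu-west-3.s3",   # S3 in the example region
    "SubnetIds": ["subnet-0123456789abcdef0"],     # placeholder
    "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder
}

if __name__ == "__main__":
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-3")
    resp = ec2.create_vpc_endpoint(**endpoint_params)
    print(resp["VpcEndpoint"]["VpcEndpointId"])
```

Unlike gateway endpoints, interface endpoints are billed per hour and per GB processed, so weigh that against any throughput gain.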
Coming in a little late, but adding some data to this conversation:
I just ran a test in ap-southeast-2 and eu-west-3 doing repeated downloads (using very similar code - see below) and I'm not seeing a difference in average transfer times when using the IGW compared to the S3 Gateway Endpoint. Yes, there are variations, but there were no differences of the magnitude mentioned in the original question. My tests used a one-gigabyte file on an instance with 12 Gb/s of bandwidth (c6g.12xlarge).
I did run an earlier test on a t4g.large and the results were highly variable, but that is due to the networking capacity of the instance itself. This might be the issue when running a test from within Fargate, perhaps.
In short: there should be very little (if any) difference in performance between using an Internet Gateway vs. an S3 Gateway Endpoint, and as per the other answers, using an S3 Gateway Endpoint is always a good recommendation because it costs you nothing extra and gives you the ability to put access control in the path if you need it. If your instances/containers reach S3 via a NAT Gateway, then the Gateway Endpoint will also reduce costs, because that traffic doesn't have to be handled by the NAT Gateway.
The code I used:
import boto3
import time

s3 = boto3.client('s3')

# bucketName and sourceName are defined elsewhere (test bucket and object key)
for i in range(10):
    startTime = time.time()
    s3.download_file(bucketName, sourceName, f'download{i}')
    endTime = time.time()
    print(f'{i}: {endTime-startTime}')
-> It seems strange again. For me, a VPC gateway endpoint keeps the flows within the AWS network, whereas the internet gateway sends you out over the internet, which should then be slower.
-> I have similar results in terms of latency with and without bucket policy.
-> I think it might be the case. Then, I wonder if it happens often, and whether I should consider the VPC S3 gateway endpoint unreliable.
The ECS task is in the same region as the s3 bucket, and I don't use s3 transfer acceleration.