To the best of my understanding, Pipe mode decreases startup times but frequently increases the bill.
In File mode, SageMaker billing starts only after the data has been copied onto the container and control is transferred to the user script.
In Pipe mode, reading the data starts after control has been transferred, so the data transfer happens during billable time.
Further, to the best of my knowledge, the data does not hit the disk (EBS). This is fast, but it also means that if you pass over your data multiple times, you have to re-read it each time, on your dime (S3 requests and container wait time).
Pipe mode is still a good idea in some cases, for example when you make only a few passes over the data and the data is so large that it would not fit on an EBS volume.
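For what it's worth, here is a minimal sketch of how switching modes looks with the SageMaker Python SDK; the bucket path, role ARN, instance type, script name, and framework versions below are placeholders, not values from the question:

```python
from sagemaker.inputs import TrainingInput
from sagemaker.pytorch import PyTorch

# Placeholder values -- substitute your own role, script, and bucket.
role = "arn:aws:iam::123456789012:role/MySageMakerRole"

estimator = PyTorch(
    entry_point="train.py",          # your training script
    role=role,
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    framework_version="1.13",
    py_version="py39",
)

# input_mode="Pipe" streams the data from S3 during training instead of
# copying it onto the container first; switch to "File" for the
# copy-then-train behavior described above.
train_input = TrainingInput(
    s3_data="s3://my-bucket/training-data/",
    input_mode="Pipe",
)

estimator.fit({"train": train_input})
```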
Also, in PyTorch for example, data loading can happen in parallel: while the GPU is chugging away on one batch, the CPUs load and prepare the data for the next batch.
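As a sketch of that overlap, a DataLoader with multiple workers prefetches batches on the CPU while the GPU trains; the toy dataset and the batch/worker numbers here are made up for illustration:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for your real training data.
features = torch.randn(10_000, 32)
labels = torch.randint(0, 2, (10_000,))
dataset = TensorDataset(features, labels)

# num_workers > 0 spawns CPU worker processes that load and prepare
# upcoming batches while the GPU is busy with the current one.
loader = DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,
    num_workers=4,       # CPU-side parallel loading
    pin_memory=True,     # speeds up host-to-GPU copies
    prefetch_factor=2,   # batches prefetched per worker
)

device = "cuda" if torch.cuda.is_available() else "cpu"
for batch_features, batch_labels in loader:
    batch_features = batch_features.to(device, non_blocking=True)
    batch_labels = batch_labels.to(device, non_blocking=True)
    # ... forward/backward pass on the GPU goes here ...
```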