AWS re:Invent 2024 - Containers or serverless functions: A path for cloud-native success
This blog post summarizes the AWS re:Invent 2024 session "Containers or serverless functions: A path for cloud-native success," presented by Maximilian Schellhorn, Senior Solutions Architect at AWS, and Emily Shea, Head of Serverless Go-To-Market at AWS. The post explores key differences between these compute options, factors influencing your decision-making, and real-world examples showing successful implementations of each approach.
Should you use containers or serverless functions for your application? At AWS re:Invent 2024, Maximilian Schellhorn and Emily Shea provided a practical guide to one of the most common architectural decisions in modern cloud development. As organizations build cloud-native products, this decision can significantly impact your operational model, cost structure, and development experience.
Let's dive into the insights they shared about these technologies and how to choose between them.
Understanding the Fundamentals
The Container Approach
Containers have revolutionized application deployment by sharing the host operating system's kernel while isolating application dependencies. Unlike virtual machines, containers are lightweight and resource-efficient—you can run many more containers than VMs on the same hardware, and they start up faster.
A container consists of several key components:
- A container image that bundles the base operating system, runtime (like Java or JavaScript), frameworks, and your application code
- A container runtime that makes your application portable across environments
- A container orchestrator that manages deployment, scaling, and health monitoring
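To make the image concept concrete, here is a minimal sketch of the kind of long-running process a container image typically bundles together with a base OS layer and a language runtime. The service name and port are illustrative, not from the session; a Dockerfile would layer this file on top of a Python base image, and the resulting image runs unchanged on any host with a container runtime.

```python
# app.py - a minimal long-running HTTP service, the kind of process
# you would package into a container image alongside a base OS layer
# and a language runtime (here, Python). Names and port are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The container exposes a port and serves requests directly.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"order service: ok\n")

if __name__ == "__main__":
    # The process stays up and waits for traffic; the orchestrator
    # decides how many copies of this container to run.
    HTTPServer(("0.0.0.0", 8080), OrderHandler).serve_forever()
```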
When deploying containers, you have two primary models to consider:
Server-based containers provide full control of infrastructure. You manage the compute instances (like Amazon EC2), networking, load balancing, and orchestration with services like Amazon Elastic Kubernetes Service (Amazon EKS). While this offers maximum flexibility, it also means you're responsible for cluster operations, maintenance, security updates, and capacity planning.
Serverless containers (like AWS Fargate) reduce your operational burden by eliminating the need to manage the underlying infrastructure. With AWS Fargate, containers are placed on fully-managed compute resources that handle patching, scaling, and high availability for you.
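As a rough sketch of what "no instance management" looks like in practice, the boto3 call below launches a task on AWS Fargate. The cluster, task definition, and subnet identifiers are placeholders, not values from the talk:

```python
# Launch a container task on AWS Fargate with boto3. There is no EC2
# instance to choose, patch, or scale; you only name the task and its
# networking. All identifiers below are illustrative placeholders.
import boto3

ecs = boto3.client("ecs")

response = ecs.run_task(
    cluster="demo-cluster",            # placeholder cluster name
    launchType="FARGATE",              # AWS manages the underlying compute
    taskDefinition="order-service:1",  # placeholder task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])
```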
The Serverless Functions Approach
AWS Lambda represents the serverless functions model, where you focus purely on your code without worrying about the infrastructure. An AWS Lambda function consists of:
- A function handler - the code that runs when your function is invoked
- A Firecracker micro VM - a lightweight isolation environment managed by AWS
- The AWS Lambda service - which handles invocation, scaling, and resource management
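In code, the function handler is just a callable that the AWS Lambda service invokes with an event payload and a context object. The handler below is a minimal illustrative example, not code from the session:

```python
# handler.py - the AWS Lambda service calls this function for each
# invocation, passing the event payload and a context object. The
# code never opens a port or runs a server loop itself.
import json

def lambda_handler(event, context):
    # 'event' carries the input (an API request, an SQS message batch,
    # an S3 notification, ...) depending on what triggered the function.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```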
The key distinction is that with AWS Lambda, you're not directly calling your function code or exposing ports like you would with containers. Instead, the AWS Lambda service sits between your function and the outside world, managing invocations and passing data to and from your code.
Emily emphasized AWS Lambda's event-driven integration capabilities. Using event source mappings, AWS Lambda can automatically poll services like Amazon Simple Queue Service (Amazon SQS), Amazon DynamoDB Streams, or Amazon Managed Streaming for Apache Kafka (Amazon MSK), invoking your functions when new data arrives without you writing polling code.
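Configuring such a mapping is a one-time API call rather than polling code you maintain. A sketch with boto3 for an SQS queue, using placeholder ARNs and names:

```python
# Create an event source mapping: the Lambda service polls the queue
# and invokes the function with batches of messages. Both the queue
# ARN and the function name below are illustrative placeholders.
import boto3

lambda_client = boto3.client("lambda")

mapping = lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:orders-queue",
    FunctionName="process-orders",
    BatchSize=10,  # up to 10 messages per invocation
)
print(mapping["UUID"])
```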
The Scaling Story
Understanding the scaling characteristics of each technology helps clarify when to use each approach.
Container Scaling
Containers are always running, waiting for requests to process. When traffic increases, a container orchestrator adds more container instances based on CPU or memory utilization thresholds (typically around 60-70%). This scaling model works well for predictable, steady workloads but requires maintaining some excess capacity for sudden traffic spikes.
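On Amazon ECS, for example, that threshold-based behavior is typically expressed as a target-tracking scaling policy. The sketch below (cluster and service names are illustrative) keeps average CPU near 65%:

```python
# Target-tracking scaling for an ECS service: the orchestrator adds or
# removes container copies to hold average CPU near the target value.
# Cluster and service names are illustrative placeholders.
import boto3

autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/demo-cluster/order-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,   # keep some headroom for sudden spikes
    MaxCapacity=20,
)

autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/demo-cluster/order-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 65.0,  # the 60-70% utilization band mentioned above
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```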
Serverless Function Scaling
AWS Lambda functions scale differently:
- They start with zero capacity when there's no traffic
- They scale up per-request, processing each unit of work independently
- When requests arrive while all existing functions are busy, AWS Lambda automatically creates new execution environments
- After traffic subsides, idle execution environments eventually shut down
This "scale from zero" capability means you only pay when processing requests, but it introduces "cold starts"—brief delays when spinning up new execution environments. AWS has improved cold start performance with features like Snap Start, which can reduce cold start times by up to 10x for Java, Python, and .NET functions.
Making the Decision: Four Key Considerations
1. Operations
Containers offer a spectrum of control:
- Full flexibility over hardware selection (CPU, memory, GPU)
- Choice of operating systems and runtimes
- Ability to SSH into instances for troubleshooting
- Greater responsibility for patching, scaling, and high availability
Serverless Functions prioritize simplicity:
- No infrastructure management
- Auto-scaling and multi-AZ deployment
- Managed runtimes that are automatically updated
- Limited customization (memory size dictates CPU and network bandwidth)
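That single knob is visible in the API: you set the memory size, and AWS Lambda allocates CPU and network bandwidth proportionally. An illustrative boto3 call, with a placeholder function name:

```python
# Memory is the one resource knob on a Lambda function; CPU share and
# network bandwidth scale with it. The function name is a placeholder.
import boto3

lambda_client = boto3.client("lambda")
lambda_client.update_function_configuration(
    FunctionName="process-orders",
    MemorySize=1024,  # MB; more memory also means more CPU
)
```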
2. Integrations
Containers provide flexibility with:
- Support for any port or protocol (HTTP, WebSockets, gRPC, TCP)
- Access to the broad CNCF ecosystem (Prometheus, KEDA, etc.)
- Ability to run infrastructure components like databases or message brokers
Serverless Functions stand out with native service integrations:
- Built-in triggers from AWS services (Amazon S3, Amazon API Gateway, Amazon EventBridge)
- Automatic scaling based on queue depth or stream activity
- Simplified event-driven programming model
3. Portability
Containers are inherently portable at the application level but face challenges with:
- Orchestrator dependencies (load balancers, CNI plugins, storage drivers)
- Environment-specific configuration
Serverless Functions have different portability considerations:
- Can be packaged as container images using AWS Lambda base images
- May require adapters to handle differences in invocation models
- Benefit from hexagonal architecture patterns that separate business logic from AWS-specific code
Max demonstrated how using abstractions and adapters in function code can isolate core business logic from platform-specific implementations, making it possible to move between compute platforms.
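A sketch of that pattern, with illustrative names rather than the session's demo code: the business logic knows nothing about AWS Lambda or HTTP, and a thin adapter translates each platform's invocation model into a plain function call.

```python
# Hexagonal-style separation (an illustrative sketch, not the session
# demo): the core business logic is plain Python with no AWS or HTTP
# imports, and each compute platform gets its own thin adapter.
import json

# --- core domain logic (the "hexagon") ---
def price_order(items: list) -> float:
    """Pure business rule: sum line totals, apply a bulk discount."""
    total = sum(i["unit_price"] * i["quantity"] for i in items)
    return round(total * 0.95 if total > 100 else total, 2)

# --- adapter for AWS Lambda: translates an event into a plain call ---
def lambda_handler(event, context):
    items = json.loads(event["body"])["items"]
    return {"statusCode": 200,
            "body": json.dumps({"price": price_order(items)})}

# --- an adapter for a containerized HTTP service would look like: ---
#   @app.post("/price")                      # e.g. a Flask route
#   def price():
#       return jsonify(price=price_order(request.get_json()["items"]))
```

Moving between AWS Lambda and a container then means swapping the adapter, not rewriting the business logic.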
4. Pricing
Containers follow a resource-based pricing model:
- You pay for the underlying compute regardless of utilization
- Pricing is by seconds or minutes of running time
- More cost-effective for high, consistent utilization
- Can use reserved instances or Savings Plans for committed usage
Serverless Functions use a consumption-based model:
- Pay only when code runs, with millisecond granularity
- No charges when idle (scale to zero)
- More cost-effective for variable or spiky workloads
- Lower infrastructure overhead when experimenting or expanding into new regions
Max compared this to owning a car versus using a car-sharing service. Car-sharing (like serverless) is cost-effective for occasional use but becomes expensive for constant usage, where car ownership (like containers) makes more economic sense.
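A back-of-the-envelope calculation makes the break-even intuition concrete. The prices below are rough approximations of public us-east-1 list prices at the time of writing; they vary by region and change over time, so treat this purely as a sketch of the comparison, not as figures from the session:

```python
# Rough break-even sketch: Lambda (pay per request + GB-second) versus
# an always-on Fargate container (pay per vCPU-hour + GB-hour). All
# prices are illustrative approximations and will drift over time;
# plug in current numbers before drawing conclusions.

LAMBDA_PER_REQUEST = 0.20 / 1_000_000   # USD per invocation
LAMBDA_PER_GB_SECOND = 0.0000166667     # USD per GB-second
FARGATE_PER_VCPU_HOUR = 0.04048         # USD
FARGATE_PER_GB_HOUR = 0.004445          # USD

def lambda_monthly_cost(requests: int, ms_per_request: int, gb: float) -> float:
    gb_seconds = requests * (ms_per_request / 1000) * gb
    return requests * LAMBDA_PER_REQUEST + gb_seconds * LAMBDA_PER_GB_SECOND

def fargate_monthly_cost(vcpu: float, gb: float) -> float:
    hours = 730  # always-on for a month
    return hours * (vcpu * FARGATE_PER_VCPU_HOUR + gb * FARGATE_PER_GB_HOUR)

# Spiky workload: 1M requests/month, 100 ms each, 0.5 GB memory.
print(f"Lambda:  ${lambda_monthly_cost(1_000_000, 100, 0.5):.2f}")  # ~ $1
# Always-on container sized at 0.5 vCPU / 1 GB.
print(f"Fargate: ${fargate_monthly_cost(0.5, 1.0):.2f}")            # ~ $18
```

Flip the workload to sustained high throughput and the comparison reverses, which is exactly the car-ownership analogy.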
Real-World Success Stories
The presentation included four customer examples showing different approaches:
Delivery Hero: Platform Engineering with Containers
Delivery Hero standardized their development platform using Amazon EKS, running over 300 clusters across 15,000 compute nodes. Their platform engineering approach provides consistent tooling for 3,000+ developers across multiple subsidiaries, handling 10 million daily orders.
UK Driver and Vehicle Licensing Agency: Hybrid Approach
DVLA modernized their driver services using both containers and serverless:
- Containerized platform for standard UI components (Ruby on Rails) and Java services
- Serverless functions and AWS Step Functions workflows for driver's license processing
- Event-driven architecture supporting human-in-the-loop approvals for license photos
Autodesk: Serverless for Variable Compute Demand
When building Autodesk Forma, a tool for architects to run simulations, they switched from Kubernetes to AWS Lambda functions because:
- Simulations had unpredictable, variable demand
- Resource-intensive workloads needed rapid scaling
- They wanted to avoid cluster management overhead
Their solution uses AWS Lambda functions to parallelize simulations, with Amazon Elastic Container Service (Amazon ECS) containers for GPU-intensive workloads like sunlight ray tracing.
Lexware Office: Modernization Journey
Lexware Office's modernization journey included:
- Initial lift-and-shift to the cloud
- Containerization of core application components
- Serverless functions for event notifications (like invoice payment webhooks)
They chose serverless for notifications because of extremely spiky traffic patterns (up to 8,000 requests per minute, then dropping to zero), making always-on capacity inefficient.
Conclusion: Choose Based on Your Workload
The speakers emphasized that the choice between containers and serverless functions should be data-driven and based on non-functional requirements rather than application type. Container-based architectures perform best with predictable, constant workloads, while serverless functions shine with variable, spiky demands.
Many organizations find success using both approaches for different parts of their applications, taking advantage of the strengths of each technology where it makes the most sense. The recommendation is clear: understand your workload patterns first, then choose the approach that best matches your operational preferences, integration needs, portability requirements, and cost constraints.
For those interested in watching the full session, including detailed explanations and demonstrations from Maximilian Schellhorn and Emily Shea, the recording is available on the AWS YouTube channel.