Container cannot bind to port 80 running as non-root user on ECS Fargate


I have an image that binds to port 80 as a non-root user. I can run it locally (macOS Monterey, Docker Desktop 4.7.1) absolutely fine.

When I try to run it as part of an ECS service on Fargate, it fails like so:

Failed to bind to 0.0.0.0/0.0.0.0:80

caused by SocketException: Permission denied

Fargate means I have to run the task in network mode awsvpc - not sure if that's related?

Any views on what I'm doing wrong? The best practices document suggests that I should be running as non-root (p.83) and that under awsvpc it's reasonable to expose port 80 (diagram on p.23).

FWIW, here's a mildly cut-down version of the JSON from my task definition:

{
    "taskDefinitionArn": "arn:aws:ecs:us-east-1:<ID>:task-definition/mything:2",
    "containerDefinitions": [
        {
            "name": "mything",
            "image": "mything:latest",
            "cpu": 0,
            "memory": 1024,
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 80,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": []
        }
    ],
    "family": "mything",
    "executionRoleArn": "arn:aws:iam::<ID>:role/ecsTaskExecutionRole",
    "networkMode": "awsvpc",
    "revision": 2,
    "volumes": [],
    "status": "ACTIVE",
    "requiresAttributes": [
        {
            "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
        },
        {
            "name": "ecs.capability.execution-role-awslogs"
        },
        {
            "name": "com.amazonaws.ecs.capability.ecr-auth"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
        },
        {
            "name": "ecs.capability.execution-role-ecr-pull"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
        },
        {
            "name": "ecs.capability.task-eni"
        }
    ],
    "placementConstraints": [],
    "compatibilities": [
        "EC2",
        "FARGATE"
    ],
    "runtimePlatform": {
        "operatingSystemFamily": "LINUX"
    },
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "cpu": "256",
    "memory": "1024",
    "tags": []
}
asked 3 years ago · 12.5K views
2 Answers
Accepted Answer

I don't think this is specific to Fargate; it's really about the underlying container runtime.

It works when you run locally through Docker because Docker sets net.ipv4.ip_unprivileged_port_start=0, which basically allows an unprivileged user to bind to any port (more details: https://github.com/moby/moby/pull/41030).
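A quick way to see this (just an illustrative check, assuming a reasonably recent Docker Engine; I haven't run it on Fargate itself, where the value presumably stays at the kernel default):

docker run --rm alpine cat /proc/sys/net/ipv4/ip_unprivileged_port_start
# prints 0 under a recent Docker Engine, so any user in the container can bind to port 80
# a stock Linux kernel defaults this to 1024, so ports below that need extra privileges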

If you want to run as a non-root user, you will need to use a non-privileged port (1024 or above) and modify your configuration to expose that port instead.
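For example (8080 is just an arbitrary choice here - anything from 1024 up works, assuming your app can be configured to listen on it), the port mapping in your task definition would become:

"portMappings": [
    {
        "containerPort": 8080,
        "hostPort": 8080,
        "protocol": "tcp"
    }
]

In awsvpc network mode the hostPort has to match the containerPort (or be omitted), so if you still want the service reachable on port 80 externally you can put a load balancer in front and have its port 80 listener forward to 8080 on the task.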

answered 3 years ago
  • Thanks, that makes sense. I have indeed resorted to running on a port > 1024.

    Would AWS consider making the same change? As I linked in the question, the best-practices documentation simultaneously suggests running as non-root and binding to port 80, so it's a little surprising that this doesn't work.

    I suspect it's also a pretty normal workflow to use docker to build and run images locally, then expect them to work when pushed to ECS.

    I raised https://github.com/aws/containers-roadmap/issues/1721.


I have never had that issue, and I've run lots of containers with ports in the 1-1024 range (which usually requires root). As far as running the app as root vs. another user goes, don't forget that you might have to use the root user to start the process that opens the socket, but then run the application itself as another user. That's typically what nginx does, as you can see:

docker run -d --rm -it -p 80:80 nginx
ps faux | grep nginx
root      207518  0.0  0.0   8856  6320 pts/0    Ss+  18:07   0:00  \_ nginx: master process nginx -g daemon off;
101       207568  0.0  0.0   9244  2508 pts/0    S+   18:07   0:00      \_ nginx: worker process
101       207569  0.0  0.0   9244  2508 pts/0    S+   18:07   0:00      \_ nginx: worker process
101       207570  0.0  0.0   9244  2508 pts/0    S+   18:07   0:00      \_ nginx: worker process
101       207571  0.0  0.0   9244  2508 pts/0    S+   18:07   0:00      \_ nginx: worker process

(101 is the nginx user, btw). I'm not much of a macOS user myself, so maybe it is just more permissive than Linux with regard to which ports can be opened. Don't forget that with containers you use the host's kernel (and therefore its settings), so a container that works on one machine might not work when run elsewhere on a more hardened host.

answered 3 years ago
  • As you say, by default the nginx image runs its main process as root - it just spawns the worker processes as a different user. As I linked in the question, the AWS ECS best practices document actively advises against this and suggests using a non-root user for the main process, which is why I am surprised that I then cannot bind to the default ports.

    I've just been playing in an Ubuntu 20.04 VM with this Dockerfile:

    FROM alpine
    
    RUN apk update && apk add netcat-openbsd
    
    RUN addgroup -g 2221 -S appgroup && adduser -u 2222 -S appuser -G appgroup
    
    USER appuser
     
    EXPOSE 80
    
    ENTRYPOINT ["nc", "-k", "-l", "80"]

    Running it like so: docker run --rm -p 8080:80 <mytag> works fine, despite the container binding to its own port 80 and Ubuntu not allowing non-root users to bind to port 80. I can send data to it from another netcat session. If I do a ps on the host (not in the container) I can see nc -k -l 80 running as uid 2222, and from the host I can netcat localhost 8080 and send data to the container.

    However, if I try this: docker run --rm --network=host <mytag>, it unsurprisingly fails, because Ubuntu won't allow a non-root user to bind to port 80 on the host.

    That is what makes me think awsvpc network mode behaves something like host network mode... but it just seems so wrong given the best-practices documentation.
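    If the sysctl explanation in the accepted answer is right, I suppose I can mimic the Fargate behaviour locally by putting that setting back to the kernel default for the container (the --sysctl override and the 1024 value are just my guess at what Fargate effectively does):

    docker run --rm --sysctl net.ipv4.ip_unprivileged_port_start=1024 -p 8080:80 <mytag>
    # with the sysctl back at 1024, nc should fail to bind to port 80 as uid 2222,
    # which would match the "Permission denied" I'm seeing on Fargate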
