I don't think this is specific to Fargate; it comes down to the container runtime. It works when you run locally through Docker because Docker sets net.ipv4.ip_unprivileged_port_start=0, basically allowing an unprivileged user to bind any port (more details: https://github.com/moby/moby/pull/41030).
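You can check this yourself, since that sysctl is namespaced per container and readable through /proc. A quick sketch, assuming a reasonably recent Docker install (20.10 or later):

# Inside a default Docker container this should print 0,
# meaning any port can be bound without privileges.
docker run --rm alpine cat /proc/sys/net/ipv4/ip_unprivileged_port_start

# On the host itself it typically prints the traditional 1024.
cat /proc/sys/net/ipv4/ip_unprivileged_port_start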
If you want to run as a non-root user, you will need to use a non-privileged port and modify your configuration to expose that port instead.
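As a minimal sketch of that approach (the user name and port are illustrative, not from this thread), the idea is to create an unprivileged user and listen above 1024:

FROM alpine
RUN apk add --no-cache netcat-openbsd
# Create an unprivileged user; any non-zero UID works here.
RUN addgroup -S app && adduser -S app -G app
USER app
# Listen on an unprivileged port instead of 80.
EXPOSE 8080
ENTRYPOINT ["nc", "-k", "-l", "8080"]

Clients can still reach it on port 80 from outside by mapping the ports, e.g. docker run --rm -p 80:8080 <image>, since it's the (root-owned) daemon, not your container process, that binds the host port.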
I have never had that issue, and I've run lots of containers with ports in the 1-1024 range (which usually requires root). As for running the app as root vs. as another user: don't forget that you might have to use root to start the process that opens the socket, but then run the application itself as another user. That's typically what nginx does, as you can see:
docker run -d --rm -it -p 80:80 nginx
ps faux | grep nginx
root 207518 0.0 0.0 8856 6320 pts/0 Ss+ 18:07 0:00 \_ nginx: master process nginx -g daemon off;
101 207568 0.0 0.0 9244 2508 pts/0 S+ 18:07 0:00 \_ nginx: worker process
101 207569 0.0 0.0 9244 2508 pts/0 S+ 18:07 0:00 \_ nginx: worker process
101 207570 0.0 0.0 9244 2508 pts/0 S+ 18:07 0:00 \_ nginx: worker process
101 207571 0.0 0.0 9244 2508 pts/0 S+ 18:07 0:00 \_ nginx: worker process
(101 is the nginx user, by the way). I'm not much of a macOS user myself, so maybe it is just more permissive than Linux about which ports you can open. Don't forget that containers share the host kernel (and therefore its settings), so running your container elsewhere, possibly on a hardened host, might not work anymore.
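That kernel dependency is easy to demonstrate: Docker lets you override namespaced sysctls per container, so you can simulate a stricter host. A sketch (the --sysctl flag and the sysctl itself are real Docker/Linux features; the exercise is mine):

# Restore the traditional privileged-port boundary inside one container;
# a non-root process should then fail to bind port 80 with a
# permission error (exact message depends on the nc build).
docker run --rm --sysctl net.ipv4.ip_unprivileged_port_start=1024 \
  -u 1000:1000 alpine nc -l -p 80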
As you say, by default the nginx image runs the parent process as root; it just spawns the worker processes as a different user. As I linked in the question, the AWS ECS best practices document actively advises against this and suggests using a non-root user for the main process, which is why I am surprised that I then cannot bind to the default ports.

I've just been playing in an Ubuntu 20.04 VM with this Dockerfile:
FROM alpine
RUN apk update && apk add netcat-openbsd
RUN addgroup -g 2221 -S appgroup && adduser -u 2222 -S appuser -G appgroup
USER appuser
EXPOSE 80
ENTRYPOINT ["nc", "-k", "-l", "80"]
Running it like so:
docker run --rm -p 8080:80 <mytag>
and it works fine, despite the container binding to its own port 80 and Ubuntu not allowing non-root users to bind to port 80. I can send data to it from another netcat session. If I do a ps on the host (not the container) I can see nc -l 80 running as uid 2222, and from the host I can netcat localhost 8080 and send data to the container.

However, if I try this:
docker run --rm --network=host <mytag>
unsurprisingly it fails, because Ubuntu won't allow me to bind to port 80 on the host.

Which is what makes me think that awsvpc network mode is something like host network mode... but that just seems so wrong given the best practice documentation.
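For what it's worth, a general Linux technique for this situation (not mentioned in this thread, and I can't say whether Fargate's runtime honours it) is to grant the binary the CAP_NET_BIND_SERVICE file capability, which lets a non-root process bind ports below 1024. A sketch against the test image above:

FROM alpine
RUN apk add --no-cache libcap netcat-openbsd
# Grant the file capability so a non-root user can bind port 80.
# ASSUMPTION: the runtime you deploy to preserves file capabilities;
# verify before relying on this.
RUN setcap 'cap_net_bind_service=+ep' /usr/bin/nc
RUN addgroup -g 2221 -S appgroup && adduser -u 2222 -S appuser -G appgroup
USER appuser
EXPOSE 80
ENTRYPOINT ["nc", "-k", "-l", "80"]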
Thanks, that makes sense. I have indeed resorted to running on a port > 1024.
Would AWS consider making the same change? As I linked in the question, the best-practices documentation simultaneously suggests running as non-root and binding to port 80, so it's a little surprising that this doesn't work.
I suspect it's also a pretty normal workflow to use Docker to build and run images locally, and then expect them to work when pushed to ECS.
I raised https://github.com/aws/containers-roadmap/issues/1721.