ECS bridge network mode not behaving like Docker networks


The ECS developer guide claims...

If the network mode is bridge, the task utilizes Docker's built-in virtual network which runs inside each container instance.

This appears to be 100% false. If it were true, a container named "database" would resolve to an IP when I reference it from any application; i.e. it would behave as if an A record had been created for me.
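For example, with a user-defined Docker network that resolution is automatic (the names and images below are just for illustration):

    docker network create mynet
    docker run -d --name database --network mynet postgres:15
    # "database" resolves via Docker's built-in DNS, no extra setup:
    docker run --rm --network mynet alpine ping -c 1 database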

Is there something within my control that could cause this to NOT work like a Docker network?

Thanks.

Asked 2 years ago · 2,147 views
3 Answers

Thanks for the candid feedback. For context, K8s and ECS are not that different. They both have built-in service discovery that can resolve a "service" name. With K8s, if you deploy the CoreDNS component, this is done automatically and by default, while with ECS you need to click "Enable Service Discovery" when you create the ECS service. For example, if you create an ECS service called mywebapp that fires up 3 ECS tasks underneath, and you have defined a namespace called local, all other tasks deployed in the cluster will be able to reach any one of those 3 tasks by referencing mywebapp.local.

The differences in the networking layer are more substantial. K8s uses a network overlay, which on AWS usually means the VPC CNI (the same awsvpc model you'd use for ECS when not using bridge). For ECS we have enabled ENI trunking, which allows you to multiplex an ENI across multiple tasks. In addition, ECS supports the traditional Docker "bridge" model, but service discovery there does not work with standard A records, because you also need to declare the port you want to reach (so you have to use SRV records). If I remember correctly, Docker allowed internal A-record-style service discovery on Swarm internal networks because they implemented an overlay as well (but I may be wrong; I have not looked into it in ages).

All the techniques described above are what Docker implemented under the covers in their new CLI to deploy to ECS, which is similar to what Compose-X does (as described by John).
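To make the SRV point concrete, here is a hedged sketch (the namespace name, service name, and IDs are illustrative, not from the question): with bridge mode the host port is dynamic, so the discovery record has to carry both a host and a port, which is exactly what an SRV lookup returns:

    # Create a private DNS namespace (here called "local"):
    aws servicediscovery create-private-dns-namespace \
        --name local \
        --vpc vpc-0123456789abcdef0

    # After enabling service discovery on the mywebapp service with SRV
    # records, another task can query host AND port in one lookup:
    dig +short SRV mywebapp.local
    # e.g. "1 1 32768 <target-host>."  ->  priority weight PORT target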

With all this said, if you feel more familiar and "at home" using K8s (hopefully Amazon EKS), you should totally use it. I will pass the feedback on to the ECS team, which will definitely make good use of it. Thanks a lot.

AWS
Expert
Answered 2 years ago
  • Yes, and my main point is that this creates bad DX (what I call developer experience). It's not behaving like native Docker networks. In native Docker, you create a network, place containers on that network, and they instantly get DNS resolution by "service-name". In UX, people talk about how it's terrible to make people click more than they need to, for example. I would expect a service like ECS to automatically create a network and place the task in that network, so that I don't have to do some special thing like creating a "local" discovery service and referencing "service-name.local".

  • Creating a namespace of "local" doesn't make sense, because it could only be used with one ECS cluster; namespace names are unique. For the sake of argument I tried it anyway, and nope, it didn't do anything. And this makes sense, because bridge mode can't have A records, only SRV records.

Accepted Answer

If both your containers run on the same EC2 node then, IIRC, you should be able to resolve them (it's been a long time since I last used bridge mode with ECS).

With that said, however, I would recommend against using bridge mode unless there is a strong reason to use it (e.g. to maintain compatibility with running on ECS Anywhere).

I would very strongly recommend moving over to the awsvpc networking mode, which allows you to go from running on ECS on EC2 to Fargate with zero change in the way you manage network access between services (I'd advise one security group per service). With awsvpc mode you get a high level of network isolation between containers running on the same host, which can be very valuable from a security point of view.
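As a minimal sketch of the change (cluster, service, subnet, and security group IDs below are placeholders):

    # Task definition: switch the network mode (JSON fragment):
    #   "networkMode": "awsvpc"
    #
    # Service creation then attaches subnets and one security group per
    # service, so ingress is managed per service rather than per host:
    aws ecs create-service \
        --cluster my-cluster \
        --service-name mywebapp \
        --task-definition mywebapp:1 \
        --desired-count 3 \
        --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc123],securityGroups=[sg-0abc123]}"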

To keep docker-compose-like naming for services to talk to each other, you can use AWS Cloud Map, which works automatically with ECS to register your tasks and associate DNS records accordingly, enabling this kind of DNS name resolution.
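A hedged sketch of the Cloud Map side (this assumes the "local" namespace already exists; IDs are placeholders). With awsvpc mode each task gets its own ENI and IP, so plain A records work:

    # Register a Cloud Map service that publishes A records:
    aws servicediscovery create-service \
        --name mywebapp \
        --namespace-id ns-0abc123 \
        --dns-config "DnsRecords=[{Type=A,TTL=60}]"

    # Then pass the returned registry ARN when creating the ECS service:
    #   --service-registries "registryArn=arn:aws:servicediscovery:<region>:<account>:service/srv-0abc123"
    # ECS then registers/deregisters task IPs under mywebapp.local.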

If you are using docker-compose locally today and want a way to do docker-compose up, but against AWS ECS, try out ECS Compose-X, which deals with network configuration, service-to-service ingress, additional AWS resources, IAM, etc. See examples/walkthrough here.

The advantage over Copilot is that you keep full docker-compose syntax compatibility going from local to cloud. We use it every day in dev/staging/prod, and as a result our devs are very autonomous: they do their local testing and, without other changes, take it through CI/CD and therefore into production.
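For illustration, the kind of compose file this workflow starts from (service names, images, and ports are hypothetical); Compose-X reads it and generates the corresponding ECS/VPC/IAM resources:

    # docker-compose.yml
    services:
      frontend:
        image: myorg/frontend:latest
        ports:
          - "80:80"
        depends_on:
          - database
      database:
        image: postgres:15
        environment:
          POSTGRES_PASSWORD: example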

Answered 2 years ago
  • awsvpc is very limited in the number of ENIs you can use on smaller instances, I think.

    If both your containers run on the same EC2 node then, IIRC, you should be able to resolve them (it's been a long time since I last used bridge mode with ECS).

    Nope, this does not work at all. I'm running two containers in one task to ensure that they are running on the same host, and they can't see each other.

    I'm seriously starting to think that ECS is simply subpar, for one primary reason: the networking consistently works in its own special way, and nothing like you'd expect for a Docker application. I learned K8s in about 2 days, simply because the network works the way you would expect if you have learned docker/docker-compose. I've been playing with ECS for about 1 to 1.5 weeks and haven't gotten anywhere with how the networking works, because it's not documented and my guesses are incorrect. Not to mention it works entirely differently when run in different ways (EC2 vs Fargate). One of the reasons it takes so long, too, is that deployments are 5-10 minutes; but that's a general AWS CloudFormation problem. Boy, I sure hope they fix that.

    Imagine if ECS container X could resolve container Y in the same cluster, no matter what mode it was configured to use. Oh, right, that would be K8s. :D

  • I've accepted your answer, because it at least helps validate how convoluted ECS really is.


If you run multiple containers in the same task that uses bridge networking, you can reference one container from the other by name by configuring the appropriate links in the container definition.

For example, if you have a Task Definition with a container named main and another called sidecar, you can publish the IP address of sidecar in the /etc/hosts file of the main container like so:

        "containerDefinitions": [
            {
                "name": "main",
                "links": [
                    "sidecar"
                ],
                ...
            },
            {
                "name": "sidecar",
                ...
            }
         ]

In the main container's /etc/hosts, you will see the IP published:

127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.4      sidecar sidecar ecs-EcsBridgeContainerNameTestStackTaskDefBF06B5D6-6-sidecar-c2ad99be83f489e00c00
172.17.0.5      main
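
With the link in place, processes in main can reach the sidecar by name (the port is whatever sidecar actually listens on; 8080 below is hypothetical):

    # From a shell inside the main container:
    curl http://sidecar:8080/health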

See also the ECS Task Definition documentation under "Network Settings".

AWS
Expert
Answered 2 years ago
  • Good answer; that would probably work. I'm not going back to test it, mind you; I'm back to using awsvpc. But this is a classic example of how ECS expects you to know "special stuff" to be able to use it. With K8s, everything works the same way Docker does, i.e. EVERY container can access EVERY other container in the same network, by name, without additional configuration.

  • Just a reminder: "The --link flag is a legacy feature of Docker."
