Indeed, containers that belong to the same task can also communicate over the localhost interface.
Looking at the logs, it appears you are using the TCP protocol to communicate, while the Task Definition's PortMapping value is configured for HTTP instead.
From the docs:
appProtocol: If you don't set a value for this parameter, then TCP is used
I'd try removing the appProtocol setting to see if that works. If it doesn't, I'd check with the Flink team for their suggestions on getting this communication working over localhost with a sidecar container.
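For illustration, a port mapping with appProtocol removed might look like the fragment below (a sketch only: the name is made up, and 6123 is Flink's default JobManager RPC port; substitute whatever port your containers actually use). With appProtocol omitted, ECS defaults to plain TCP, per the docs quoted above.

```json
{
  "portMappings": [
    {
      "name": "flink-rpc",
      "containerPort": 6123,
      "protocol": "tcp"
    }
  ]
}
```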
Hope this helps!
What we managed to test:
- Created another task with a similar deployment: one container running a FastAPI app and one container that constantly calls the app. It works perfectly without any port mapping. I set up Session Manager on the containers, connected to them, and tested the curl, nc and telnet commands with both localhost and the 127.0.0.1 IP.
- Did the same on the Flink container; there is traffic there too. However, my colleague suggests the problem could be RPC traffic, as we found this old issue: https://github.com/grpc/grpc/issues/19633
At the moment we are looking for a way to test it. Any ideas?
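One way to script the same kind of reachability check you did with curl/nc is a small TCP probe like the sketch below (the port 8081 is a placeholder; substitute the port your Flink sidecar actually listens on). This only verifies that a raw TCP connection succeeds, so it won't catch protocol-level (e.g. gRPC) issues, but it separates "can't connect at all" from "connects but the RPC layer fails".

```python
# Minimal localhost TCP reachability probe, similar in spirit to the
# curl/nc/telnet checks described above.
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 8081 is a placeholder port for the sidecar under test.
    for host in ("127.0.0.1", "localhost"):
        print(f"{host}: reachable = {can_connect(host, 8081)}")
```

If the probe connects but the application still fails, that would point toward the RPC layer rather than the network path.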
Each container running on Fargate will receive its own separate runtime environment. That means one container cannot connect to another container using the loopback (127.0.0.1) address. You must use the actual IP address of the other container, the one allocated to it within the VPC. You'll need to look this up using a service discovery tool or by querying the AWS APIs.
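If you ever do need the task's VPC IP address (for example, for traffic coming from outside the task), the ECS task metadata endpoint exposes it from inside any container. A sketch, assuming the documented v4 metadata response shape (the fetch only works inside a running ECS task; the parsing helper is testable anywhere):

```python
# Sketch: looking up container IPv4 addresses via the ECS task metadata
# endpoint (v4). The base URL is exposed in the ECS_CONTAINER_METADATA_URI_V4
# environment variable inside ECS tasks.
import json
import os
import urllib.request

def container_ips(task_metadata: dict) -> dict:
    """Map container name -> first IPv4 address in its Networks list."""
    ips = {}
    for container in task_metadata.get("Containers", []):
        for network in container.get("Networks", []):
            addrs = network.get("IPv4Addresses", [])
            if addrs:
                ips[container["Name"]] = addrs[0]
                break
    return ips

def fetch_task_metadata() -> dict:
    """Fetch task-level metadata; only works inside a running ECS task."""
    base = os.environ["ECS_CONTAINER_METADATA_URI_V4"]
    with urllib.request.urlopen(f"{base}/task") as resp:
        return json.load(resp)
```

Note that on Fargate with the awsvpc network mode, all containers in a task share the same elastic network interface, so they report the same task-level IP.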
Indeed, every day is a learning day - see the answer from Henrique Santana; this is indeed possible.
@Bretski-AWS, thank you for your comment. I don't understand whether these knowledge articles are incorrect or my interpretation is wrong. Also, would I be able to achieve what I want with the EC2 launch type and bridge network mode?