So I figured out what was happening. This was a dev-test box where we were testing various config stacks with docker-compose and a named networks: section. Each time docker-compose up -d was executed, Compose recreated the network and allocated the next CIDR block, starting from the default 172.17.0.0/16. After about 10 restarts it reached 172.27.0.0/16 and created a bridge interface that sat on top of the CIDR used by our VPN routes.
ubuntu@ip-172-31-128-87:~$ netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         172.31.128.1    0.0.0.0         UG        0 0          0 ens5
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
172.19.0.0      0.0.0.0         255.255.0.0     U         0 0          0 br-61dfa3cb04db
172.27.0.0      0.0.0.0         255.255.0.0     U         0 0          0 br-889068f61237
172.31.128.0    0.0.0.0         255.255.254.0   U         0 0          0 ens5
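For anyone chasing the same symptom, a quick way to confirm which Compose-created network owns the conflicting bridge is to dump each network's name and subnet with the standard Docker CLI. A minimal sketch (the network names on your box will differ):

# List all Docker networks; Compose names them <project>_<network>
docker network ls

# Print each network's name and subnet so the 172.27.0.0/16 one stands out
for net in $(docker network ls -q); do
  docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' "$net"
done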
That overlapping route is why the box was only accessible from another instance in the 172.31.0.0/16 range. Even after an instance reboot, docker-compose held onto the network bridge config even though the containers had crashed. Another docker-compose down cleaned them up, and then we were able to pin the network creation to a non-conflicting CIDR in docker-compose.yml:
networks:
  mynet:
    ipam:
      driver: default
      config:
        - subnet: 172.23.0.0/16
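For completeness, here is roughly where that networks: block sits in a full docker-compose.yml and how a service attaches to it. This is only a sketch; the web service and nginx image are placeholders, not part of the original stack:

version: "3.8"

services:
  web:
    image: nginx:alpine        # placeholder service, purely for illustration
    networks:
      - mynet

networks:
  mynet:
    ipam:
      driver: default
      config:
        - subnet: 172.23.0.0/16   # pinned so up/down cycles cannot walk into VPN ranges

With the subnet pinned, repeated docker-compose down / docker-compose up -d cycles should keep reusing 172.23.0.0/16 instead of walking up through the 172.x pool.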
From what you are describing here, the right path would be to open a support ticket, as the support engineers have the right tools to analyze the situation and help you diagnose what's happening.
Best,