Request flow: client → public ALB → private broker service → private helper service
I expected Service Connect to use private IPs from the VPC CIDR block, but it consistently hands out addresses in the 127.255.x.x range instead.
In my application, there's a route like this: http://router-2017814625.ca-central-1.elb.amazonaws.com/cat
It displays the /etc/hosts file of my broker service:
127.0.0.1 localhost
172.31.56.50 ip-172-31-56-50.ca-central-1.compute.internal
127.255.0.1 broker
2600:f0f0:0:0:0:0:0:1 broker
127.255.0.2 helper
2600:f0f0:0:0:0:0:0:2 helper
Here, all the Service Connect service names resolve to IP addresses starting with 127.255, while my VPC CIDR block is in the 172.31.x.x range. So what is actually happening here? Is Service Connect creating its own virtual network, like a VPC?
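As a quick sanity check (a minimal sketch using only the addresses from my /etc/hosts above), these 127.255.x.x entries sit inside the standard 127.0.0.0/8 loopback range, not inside my VPC CIDR, so connections to them never leave the task itself:

```python
import ipaddress

# The Service Connect entries from /etc/hosts above
for ip in ["127.255.0.1", "127.255.0.2"]:
    addr = ipaddress.ip_address(ip)
    # Loopback address? In the 172.31.0.0/16 VPC CIDR?
    print(ip, addr.is_loopback, addr in ipaddress.ip_network("172.31.0.0/16"))
# → 127.255.0.1 True False
# → 127.255.0.2 True False
```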
Pinging the helper microservice through one of these unfamiliar IPs also works:
http://router-2017814625.ca-central-1.elb.amazonaws.com/ping/helper
Since there are two tasks under the helper service, the response toggles between:
Response: <h1 style='text-align:center'>Passport: CYsjIx0zMvQk3Vg5BKt9wBYabrSnfj7RG8Kq8y6sWjHHs6irP1</h1> &
Response: <h1 style='text-align:center'>Passport: I9xlXUVpKHu2oWipby3hlohNgeWXhNNc7EiEU3tni6EXgYk4RV</h1>
And this raises my second question: the broker's /etc/hosts shows only a single IPv4 address for helper, and I have only one task under the broker service. So how is the broker reaching both tasks inside the helper service and giving us automatic load balancing?
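My working theory (a sketch of the idea only, not ECS's actual implementation; the two task IPs below are hypothetical) is that the local agent listening on 127.255.0.2 keeps its own list of real helper task IPs in the VPC and spreads new connections across them, roughly like:

```python
from itertools import cycle

# Hypothetical endpoint list the local Service Connect agent might hold
# for the "helper" virtual address; the real task IPs live in the VPC CIDR.
helper_tasks = cycle(["172.31.56.61", "172.31.56.62"])

def pick_backend():
    # Round-robin: each new connection to 127.255.0.2 is forwarded to the
    # next real task IP, which would explain why /ping/helper toggles
    # between the two Passport responses.
    return next(helper_tasks)

print([pick_backend() for _ in range(4)])
# → ['172.31.56.61', '172.31.56.62', '172.31.56.61', '172.31.56.62']
```

That would mean the single hosts entry is just a stable local handle, and the per-task fan-out happens inside the proxy, not in DNS.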