When components run containers, there are a couple of things that need to be completed. You've confirmed the first, which is injecting the AWS_CONTAINER_CREDENTIALS_FULL_URI environment variable into the container.
The Nucleus sets up the credential provider (TES) on localhost of the host system. For containers to reach that endpoint instead of their own localhost, the container needs access to the host's networking namespace, as mentioned here.
For the container's docker run command (or the equivalent in another container manager or docker-compose), try adding --network host and see if curl works. If so, your container should be able to use the endpoint to get credentials.
If this doesn't work, can you provide the lifecycle portion of the recipe file?
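A minimal sketch of that check, runnable on the core device; the image name is taken from the compose file below as a placeholder, and the env vars are assumed to be exported by the Greengrass lifecycle. Note that the TES HTTP endpoint expects the Authorization header, so curl without it may return an empty or error response:

```shell
# --network host shares the host's network namespace, so localhost inside
# the container is the host's localhost (where TES listens).
docker run --rm --network host \
  -e AWS_CONTAINER_CREDENTIALS_FULL_URI \
  -e AWS_CONTAINER_AUTHORIZATION_TOKEN \
  XXXXXXX.dkr.ecr.us-east-1.amazonaws.com/stowworkcellservice-1.0:latest \
  sh -c 'curl -s -H "Authorization: ${AWS_CONTAINER_AUTHORIZATION_TOKEN}" \
               "${AWS_CONTAINER_CREDENTIALS_FULL_URI}"'
```

A JSON body with temporary credentials indicates the container can reach TES; a connection error points at the network namespace or the TES listener.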
Yes, I am using network_mode: host in the docker-compose file.
docker-compose file for reference:
version: "3"
networks:
  core:
services:
  core:
    image: "XXXXXXX.dkr.ecr.us-east-1.amazonaws.com/coreservice-1.0:latest"
    container_name: core
    # networks:
    #   - core
    network_mode: host
    environment:
      AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT: ${AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT}
      SVCUID: ${SVCUID}
    volumes:
      - ${AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT}:${AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT}
  stowservice:
    image: "XXXXXXX.dkr.ecr.us-east-1.amazonaws.com/stowworkcellservice-1.0:latest"
    container_name: stowservice
    network_mode: host
    # networks:
    #   - core
    #   - host
    # ports:
    #   - '38134:38135'
    environment:
      AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT: ${AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT}
      SVCUID: ${SVCUID}
      AWS_CONTAINER_CREDENTIALS_FULL_URI: ${AWS_CONTAINER_CREDENTIALS_FULL_URI}
      AWS_CONTAINER_AUTHORIZATION_TOKEN: ${AWS_CONTAINER_AUTHORIZATION_TOKEN}
      PROVISION: "true"
    depends_on:
      - core
      - ledservice
      - scannerservice
    volumes:
      - ${AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT}:${AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT}
    command: --uri localhost:4400 --port 9100 --name stow
Lifecycle section of the recipe for reference:
"Manifests": [
  {
    "Platform": {
      "os": "all"
    },
    "Lifecycle": {
      "Setenv": {
      },
      "Run": "docker rm core -f && docker rm stowservice -f && docker-compose -f {artifacts:path}/docker-compose.yml up -d"
    },
    "Artifacts": [
      {
        "URI": "docker:XXXXXXX.dkr.ecr.us-east-1.amazonaws.com/coreservice-1.0:latest"
      },
      {
        "URI": "s3://bucket/docker-compose.yml"
      },
      {
        "URI": "docker:XXXXXXX.dkr.ecr.us-east-1.amazonaws.com/stowworkcellservice-1.0:latest"
      }
    ]
  }
],
Added the lifecycle portion below. curl ${AWS_CONTAINER_CREDENTIALS_FULL_URI} gives an empty response when the IoT Greengrass Core software is installed using default provisioning.
With provision=true, the same curl gives: curl: (7) Failed to connect to localhost port 38135 after 0 ms: Connection refused
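One way to narrow down the "connection refused" case is to check, on the host itself (outside any container), whether anything is listening on the port named in the URI. Port 38135 below is taken from the error message; the greengrass-cli path assumes the default /greengrass/v2 install root and that the CLI component is deployed:

```shell
# Confirm which host and port the URI actually points at.
echo "${AWS_CONTAINER_CREDENTIALS_FULL_URI}"

# Is anything listening on that port on the host? (Substitute the port
# from the URI above if it differs.)
netstat -tln | grep 38135

# If nothing is listening, check that the token exchange service component
# is deployed and RUNNING (default install root assumed).
sudo /greengrass/v2/bin/greengrass-cli component list
```

If the listener is missing on the host too, the problem is on the Greengrass side rather than in the container networking.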
Can you run
netstat -na
inside the container and see if the host's listening sockets show up? Based on the recipe and compose files, this should be working. And to confirm: is the Greengrass Core nucleus running as a process alongside dockerd on the host? Asking to clarify the coreservice
container.
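For reference, that check can be run from the host with docker exec (container name taken from the compose file; if netstat isn't installed in the image, /proc/net/tcp is a fallback):

```shell
# Socket view from inside the running container; with network_mode: host
# this should match the host's own listeners.
docker exec stowservice netstat -na | grep 38135

# Same view from the host for comparison.
netstat -na | grep 38135

# Fallback if the image has no netstat (ports in /proc/net/tcp are hex;
# 38135 decimal is 94F7 hex).
docker exec stowservice cat /proc/net/tcp | grep -i 94F7
```

If the two views differ, the container is not actually in the host's network namespace despite the compose setting.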