Hi Dirk-R,
The folder structure in the S3 bucket does not matter for the Artifacts parameter; you can choose any location accessible by the role policy used by Greengrass. But in this case, you cannot add artifacts by merging configuration; they have to be specified in the component recipe.
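For illustration, a recipe-specified S3 artifact could look like the fragment below (the bucket and key are made-up placeholders, and the Greengrass role still needs S3 read permission on them):

```json
"Manifests": [
  {
    "Platform": {},
    "Lifecycle": {},
    "Artifacts": [
      {
        "Uri": "s3://example-bucket/artifacts/com.example.MyComponent/1.0.0/myfile.json"
      }
    ]
  }
]
```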
Can you confirm that the role being used by Greengrass has the policy you mentioned below attached?
Are you seeing errors or warnings in the Greengrass log during the deployment?
Hope this helps,
David
Edited to add clarifications about the "artifacts" section and when it can be used.
Edited by: davidpAWS on Apr 22, 2021 11:23 AM
Hi David,
thanks for this important clarification that it's not possible to add artifacts by merging the configuration (although I think I have seen that somewhere in the documentation examples, but I can't find it ;) ).
The artifacts feature is not suitable for our use case, because we want different files on the different cores.
So we will continue with the option to read our configuration file from an s3 bucket.
Yes, we're sure that this role gets used, because its policy also contains the settings for the Secrets Manager, which works successfully on our machine.
We had a look in the greengrass*.log file, as well as in our component's log files, but didn't find any further hint.
Today we investigated a little further in the code, and it seems our request was blocked by our company proxy (which worked fine with Greengrass v1). After adding the related setting to the code we get a little further:
{noformat}
import os
import boto3
from botocore.config import Config

s3 = boto3.client('s3', config=Config(proxies={'http': os.environ["HTTP_PROXY"]}))
{noformat}
Now it seems we need to add the access_key_id and its secret!
Do you know a way to get both via the configured "GreengrassV2TokenExchangeRoleAccess"? The IPC components, like the secret manager, seem to do that behind the scenes, but what do we have to code in Python to do the same for the S3 access?
Thanks in advance
Kind regards,
Dirk
Edited by: Dirk-R on Apr 26, 2021 4:06 PM
Hello,
In order to use AWS credentials, please see https://docs.aws.amazon.com/greengrass/v2/developerguide/token-exchange-service-component.html. You must add aws.greengrass.TokenExchangeService as a dependency of your service. Then it will vend credentials to your AWS client using standard means. The credentials will have the permissions of the IAM Role which is aliased by the IoT Role alias that you've setup.
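To make the "standard means" concrete: Greengrass exports AWS_CONTAINER_CREDENTIALS_FULL_URI (a local HTTP endpoint backed by the TokenExchangeService) into the component's environment, and the AWS SDKs' container credential provider reads it automatically. The stdlib-only sketch below shows roughly what the SDK does under the hood (simplified; the real providers also handle caching, expiry, and errors):

```python
import json
import os
import urllib.request

def fetch_container_credentials(environ=os.environ):
    """Sketch of the SDKs' container credential provider: GET the
    credentials endpoint that Greengrass advertises via the environment."""
    uri = environ["AWS_CONTAINER_CREDENTIALS_FULL_URI"]
    request = urllib.request.Request(uri)
    token = environ.get("AWS_CONTAINER_AUTHORIZATION_TOKEN")
    if token:
        # The SDKs attach this header when the token variable is set.
        request.add_header("Authorization", token)
    with urllib.request.urlopen(request) as response:
        # The endpoint returns AccessKeyId, SecretAccessKey, Token, Expiration.
        return json.load(response)
```

In practice you never call this yourself: with aws.greengrass.TokenExchangeService as a dependency, a plain boto3.client("s3") resolves credentials through this chain, with no explicit access key in your code.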
Cheers,
Michael Dombrowski
Hello Michael,
Yes, we studied this and I would say we're fine with that.
For your information, we set up this GG v2 component using the AWS console by choosing "Import Lambda function".
This automatically created the recipe quoted at the end of this message.
It also automatically created a dependency on aws.greengrass.TokenExchangeService, but with VersionRequirement >=1.0.0.
When checking the deployment configuration, I find the "aws.greengrass.TokenExchangeService" component listed in version 2.0.3.
Our core device has a certificate assigned that has our "PSNGreengrassCoreTokenExchangeRoleAliasPolicy" attached.
This policy says:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:AssumeRoleWithCertificate",
      "Resource": "arn:aws:iot:eu-west-1:111934086604:rolealias/PSNGreengrassCoreTokenExchangeRoleAlias"
    }
  ]
}
This role alias points to the IAM role "PSNGreengrassV2TokenExchangeRole", where, out of frustration, we allowed all S3 actions on all resources in our attached policy "PSNGreengrassV2TokenExchangeRoleAccess", and we ensured the trusted entities include the identity provider credentials.iot.amazonaws.com and arn:aws:iam::111934086604:role/cloudops.
What are we missing here to read from S3 in our IoT component?
BTW - maybe this relates: I just tried to follow the "Hello world" example from the documentation, to get a minimal example running. I created an "artifacts" folder in S3, added my hello_world.py to that folder, used the AWS console (GG v2) to create a new component choosing "Enter recipe as JSON", and changed the example artifact URI to my recently uploaded hello_world.py. When clicking "Create component" I get the error "Invalid Input: Encountered following errors in Artifacts: {s3://dil-sc-lev-s3-d8c4cbf1/artifacts/hello_world.py = Specified artifact resource cannot be accessed}". Does that relate to my initial issue?
Added P.S.: I just realized that our IoT configuration is in region "Ireland (eu-west-1)" while the S3 bucket was created in region "Frankfurt (eu-central-1)". Can this be the root cause of our issue?
{
  "RecipeFormatVersion": "2020-01-25",
  "ComponentName": "boto3_s3_test",
  "ComponentVersion": "1.1.2",
  "ComponentType": "aws.greengrass.lambda",
  "ComponentDescription": "missing \"import sys\" added",
  "ComponentPublisher": "AWS Lambda",
  "ComponentSource": "arn:aws:lambda:eu-west-1:111934086604:function:boto3_s3_test:4",
  "ComponentConfiguration": {
    "DefaultConfiguration": {
      "lambdaExecutionParameters": {
        "EnvironmentVariables": {}
      },
      "containerParams": {
        "memorySize": 16000,
        "mountROSysfs": false,
        "volumes": {},
        "devices": {}
      },
      "containerMode": "GreengrassContainer",
      "timeoutInSeconds": 3,
      "maxInstancesCount": 100,
      "inputPayloadEncodingType": "json",
      "maxQueueSize": 1000,
      "pinned": true,
      "maxIdleTimeInSeconds": 60,
      "statusTimeoutInSeconds": 60,
      "pubsubTopics": {}
    }
  },
  "ComponentDependencies": {
    "aws.greengrass.LambdaLauncher": {
      "VersionRequirement": ">=1.0.0",
      "DependencyType": "HARD"
    },
    "aws.greengrass.TokenExchangeService": {
      "VersionRequirement": ">=1.0.0",
      "DependencyType": "HARD"
    },
    "aws.greengrass.LambdaRuntimes": {
      "VersionRequirement": ">=1.0.0",
      "DependencyType": "SOFT"
    }
  },
  "Manifests": [
    {
      "Platform": {},
      "Lifecycle": {},
      "Artifacts": [
        {
          "Uri": "greengrass:lambda-artifact.zip",
          "Digest": "fx7d3M30aF/y5Q/8F0N2qpJjJK7HAAZGvZ/kXo7Apjc=",
          "Algorithm": "SHA-256",
          "Unarchive": "ZIP",
          "Permission": {
            "Read": "OWNER",
            "Execute": "NONE"
          }
        }
      ]
    }
  ],
  "Lifecycle": {
    "startup": {
      "requiresPrivilege": true,
      "script": "{aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher start"
    },
    "setenv": {
      "AWS_GREENGRASS_LAMBDA_CONTAINER_MODE": "{configuration:/containerMode}",
      "AWS_GREENGRASS_LAMBDA_ARN": "arn:aws:lambda:eu-west-1:111934086604:function:boto3_s3_test:4",
      "AWS_GREENGRASS_LAMBDA_FUNCTION_HANDLER": "lambda_function.lambda_handler",
      "AWS_GREENGRASS_LAMBDA_ARTIFACT_PATH": "{artifacts:decompressedPath}/lambda-artifact",
      "AWS_GREENGRASS_LAMBDA_CONTAINER_PARAMS": "{configuration:/containerParams}",
      "AWS_GREENGRASS_LAMBDA_STATUS_TIMEOUT_SECONDS": "{configuration:/statusTimeoutInSeconds}",
      "AWS_GREENGRASS_LAMBDA_ENCODING_TYPE": "{configuration:/inputPayloadEncodingType}",
      "AWS_GREENGRASS_LAMBDA_PARAMS": "{configuration:/lambdaExecutionParameters}",
      "AWS_GREENGRASS_LAMBDA_RUNTIME_PATH": "{aws.greengrass.LambdaRuntimes:artifacts:decompressedPath}/runtime/",
      "AWS_GREENGRASS_LAMBDA_EXEC_ARGS": "[\"python3.7\",\"-u\",\"/runtime/python/lambda_runtime.py\",\"--handler=lambda_function.lambda_handler\"]",
      "AWS_GREENGRASS_LAMBDA_RUNTIME": "python3.7"
    },
    "shutdown": {
      "requiresPrivilege": true,
      "script": "{aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher stop; {aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher clean"
    }
  }
}
Edited by: Dirk-R on May 3, 2021 12:43 PM
Dear all,
thanks for the support!
Meanwhile I figured out that our problem was that the AWS IoT service is located in region "Ireland (eu-west-1)" while the S3 bucket is located in region "Frankfurt (eu-central-1)".
It seems that this cross-region access is not supported!
Reading the file from an S3 bucket located in region "Ireland (eu-west-1)" worked fine! (Both buckets have the same permission settings.)
Kind regards,
Dirk