Greengrass v2: 'Forbidden' (403) to get a file from S3


Hi,

we are using AWS Greengrass v2 and have a core device successfully running code provided via a Lambda function. We are already retrieving secrets via Secrets Manager and streaming our data to the cloud via Kinesis.
But we still struggle to access/read a configuration file we have stored in an S3 bucket.

We tried two ways, both without success.
It would be great to get some advice to make at least one work.

  1. As this solution has been migrated from Greengrass v1 to v2, we had the following piece of code working fine:
{noformat}
import boto3
import botocore

try:
    s3 = boto3.client('s3')
    obj = s3.get_object(Bucket="OurBucketName", Key="OurFile1.csv")
    csv_file = obj['Body'].read().decode('utf-8-sig')
    print(csv_file)
except botocore.exceptions.ClientError as error:
    print('Error long: ', error.response)
{noformat}

But since migrating to Greengrass v2 we get a botocore exception: Failed due to: ClientError('An error occurred (403) when calling the GetObject operation: Forbidden')

or, as printed by the code's exception handler:

{noformat}
...lambda_function.py:12,Error long: . {serviceInstance=0, serviceName=boto3_s3_test, currentState=RUNNING}
...lambda_function.py:12, . {serviceInstance=0, serviceName=boto3_s3_test, currentState=RUNNING}
...lambda_function.py:12,{'Error': {'Code': '403', 'Message': 'Forbidden'}, 'ResponseMetadata': {'RequestId': '', 'HostId': '', 'HTTPStatusCode': 403, 'HTTPHeaders': {'connection': 'Keep-Alive', 'content-type': 'text/html', 'cache-control': 'no-cache', 'content-length': '5748', 'x-frame-options': 'deny'}, 'RetryAttempts': 1}}. {serviceInstance=0, serviceName=boto3_s3_test, currentState=RUNNING}
{noformat}

Does anyone have an idea why we still get a 403 (Forbidden) error?

  2. We also tried to deploy our configuration files using the Artifacts feature, but without success.
    We configured it in the "Configuration update" section of the AWS component in the deployment configuration:
{
  "reset": [],
  "merge": {
    "Artifacts": [
      {
        "URI": "s3://OurBucketName/OurFile1.csv"
      },
      {
        "URI": "s3://OurBucketName/OurFile2.csv"
      }
    ]
  }
}

After the deployment we can't find our files on the Greengrass core device, nor any hint in any of the log files.

The documentation comes with this example for the artifact URI:
"s3://DOC-EXAMPLE-BUCKET/artifacts/MyGreengrassComponent/1.0.0/artifact.py"
Is that folder structure .../artifacts/<ComponentName>/<ComponentVersion>/... required?

Does anyone have an idea why we don't get the artifacts on our Greengrass core device?

BTW, for both attempts we followed the documentation and adjusted our version of the "GreengrassV2TokenExchangeRoleAccess" policy to allow (for now) all S3 actions on all resources:

{
	"Sid": "VisualEditor0",
	"Effect": "Allow",
	"Action": [
		"greengrass:*",
		"iot:Receive",
		"logs:CreateLogStream",
		"iot:Subscribe",
		"secretsmanager:*",
		"s3:*",
		"iot:Connect",
		"logs:DescribeLogStreams",
		"iot:DescribeCertificate",
		"logs:CreateLogGroup",
		"logs:PutLogEvents",
		"iot:Publish"
	],
	"Resource": "*"
}

Thanks in advance!
Regards,
Dirk_R

P.S.: Editing a post to get a proper result is definitely a pain in the neck!

Dirk-R
Asked 3 years ago · Viewed 941 times
5 answers

Hi Dirk-R,

The folder structure in the S3 bucket does not matter for the Artifacts parameter; you can choose any location accessible by the role policy used by Greengrass. But in this case you cannot add artifacts by merging configuration: they have to be specified in the component recipe.
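
For illustration, a minimal recipe fragment showing where artifacts would go (the bucket and file names are taken from your example; the surrounding recipe fields are assumed):

{
  "Manifests": [
    {
      "Artifacts": [
        {
          "Uri": "s3://OurBucketName/OurFile1.csv"
        },
        {
          "Uri": "s3://OurBucketName/OurFile2.csv"
        }
      ]
    }
  ]
}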

Can you confirm the role that is being used by Greengrass is using the policy you have mentioned below?

Are you seeing errors or warnings in the Greengrass log during the deployment?

Hope this helps,
David

Edited to add clarifications about the "artifacts" section and when it can be used.

Edited by: davidpAWS on Apr 22, 2021 11:23 AM

AWS
Answered 3 years ago

Hi David,

thanks for this important clarification that it's not possible to add artifacts by merging the configuration (although I think I have seen that somewhere in the documentation examples, but I can't find it ;) ).
The artifacts feature is not suitable for our use case anyway, because we want different files on the different cores.
So we will continue with the option of reading our configuration file from an S3 bucket.

Yes, we're sure that this role gets used, because its policy also contains the settings for Secrets Manager, which works successfully on our machine.
We had a look in the greengrass*.log file, as well as in our component's log files, but did not find any further hint.

Today we investigated the code a little further, and it seems our request was being blocked by our company proxy (which worked fine with Greengrass v1). After adding the related setting to the code we got a little further:

{noformat}
import os
from botocore.config import Config

# Route S3 calls through the corporate proxy; S3 endpoints are HTTPS,
# so an 'https' proxy entry may be needed as well
s3 = boto3.client('s3', config=Config(proxies={'http': os.environ["HTTP_PROXY"]}))
{noformat}

Now it seems we need to add the access key ID and its secret!

Do you know a way to get both via the configured "GreengrassV2TokenExchangeRoleAccess" role? The IPC components, like the secrets manager, seem to do that behind the scenes, but what do we have to code in Python to do the same for the S3 access?

Thanks in advance
Kind regards,
Dirk

Edited by: Dirk-R on Apr 26, 2021 4:06 PM

Dirk-R
Answered 3 years ago

Hello,
In order to use AWS credentials, please see https://docs.aws.amazon.com/greengrass/v2/developerguide/token-exchange-service-component.html. You must add aws.greengrass.TokenExchangeService as a dependency of your service. It will then vend credentials to your AWS client through standard means. The credentials will have the permissions of the IAM role that is aliased by the IoT role alias you've set up.
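
To illustrate (a minimal sketch, assuming the bucket and key names from the question): with the TokenExchangeService dependency in place, Greengrass exposes a local credentials endpoint via the AWS_CONTAINER_CREDENTIALS_FULL_URI environment variable, and boto3's default credential chain resolves it automatically, so no access key ID or secret needs to be coded:

{noformat}
import boto3

# No explicit credentials: boto3's default credential chain reads
# AWS_CONTAINER_CREDENTIALS_FULL_URI, which Greengrass sets for components
# that depend on aws.greengrass.TokenExchangeService.
s3 = boto3.client('s3')
obj = s3.get_object(Bucket="OurBucketName", Key="OurFile1.csv")
print(obj['Body'].read().decode('utf-8-sig'))
{noformat}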

Cheers,
Michael Dombrowski

AWS
Expert
Answered 3 years ago

Hello Michael,

yes, we studied this and I would say we're fine with that.
For your information, we set up this GG v2 component using the AWS console by choosing "Import Lambda function".
This automatically created the recipe quoted at the end of this message.
It automatically created a dependency on aws.greengrass.TokenExchangeService, but with VersionRequirement >=1.0.0.
When checking the deployment configuration I find the "aws.greengrass.TokenExchangeService" component listed in version 2.0.3.

Our core device has a certificate assigned that has our "PSNGreengrassCoreTokenExchangeRoleAliasPolicy" attached.
This policy says:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:AssumeRoleWithCertificate",
      "Resource": "arn:aws:iot:eu-west-1:111934086604:rolealias/PSNGreengrassCoreTokenExchangeRoleAlias"
    }
  ]
}

This role alias points to the IAM role "PSNGreengrassV2TokenExchangeRole", where, out of frustration, we allowed all S3 actions on all resources in our attached policy "PSNGreengrassV2TokenExchangeRoleAccess", and ensured the trusted identity providers are credentials.iot.amazonaws.com and arn:aws:iam::111934086604:role/cloudops.

What are we missing here to read s3 from our IoT component?

BTW, maybe this is related: I just tried to follow the "Hello world" example from the documentation, to get a minimum running. I created an "artifacts" folder in S3, added my hello_world.py to that folder, used the AWS console (GG v2) to create a new component choosing "Enter recipe as JSON", and changed the example artifact URI to my recently uploaded hello_world.py. When clicking "Create Component" I get the error "Invalid Input: Encountered following errors in Artifacts: {s3://dil-sc-lev-s3-d8c4cbf1/artifacts/hello_world.py = Specified artifact resource cannot be accessed}". Does that relate to my initial issue?

Added P.S.: I just realized that our IoT configuration is in region "Ireland (eu-west-1)" while the S3 bucket was created in region "Frankfurt (eu-central-1)". Can this be the root cause of our issue?

{
  "RecipeFormatVersion": "2020-01-25",
  "ComponentName": "boto3_s3_test",
  "ComponentVersion": "1.1.2",
  "ComponentType": "aws.greengrass.lambda",
  "ComponentDescription": "missing \"import sys\" added",
  "ComponentPublisher": "AWS Lambda",
  "ComponentSource": "arn:aws:lambda:eu-west-1:111934086604:function:boto3_s3_test:4",
  "ComponentConfiguration": {
    "DefaultConfiguration": {
      "lambdaExecutionParameters": {
        "EnvironmentVariables": {}
      },
      "containerParams": {
        "memorySize": 16000,
        "mountROSysfs": false,
        "volumes": {},
        "devices": {}
      },
      "containerMode": "GreengrassContainer",
      "timeoutInSeconds": 3,
      "maxInstancesCount": 100,
      "inputPayloadEncodingType": "json",
      "maxQueueSize": 1000,
      "pinned": true,
      "maxIdleTimeInSeconds": 60,
      "statusTimeoutInSeconds": 60,
      "pubsubTopics": {}
    }
  },
  "ComponentDependencies": {
    "aws.greengrass.LambdaLauncher": {
      "VersionRequirement": ">=1.0.0",
      "DependencyType": "HARD"
    },
    "aws.greengrass.TokenExchangeService": {
      "VersionRequirement": ">=1.0.0",
      "DependencyType": "HARD"
    },
    "aws.greengrass.LambdaRuntimes": {
      "VersionRequirement": ">=1.0.0",
      "DependencyType": "SOFT"
    }
  },
  "Manifests": [
    {
      "Platform": {},
      "Lifecycle": {},
      "Artifacts": [
        {
          "Uri": "greengrass:lambda-artifact.zip",
          "Digest": "fx7d3M30aF/y5Q/8F0N2qpJjJK7HAAZGvZ/kXo7Apjc=",
          "Algorithm": "SHA-256",
          "Unarchive": "ZIP",
          "Permission": {
            "Read": "OWNER",
            "Execute": "NONE"
          }
        }
      ]
    }
  ],
  "Lifecycle": {
    "startup": {
      "requiresPrivilege": true,
      "script": "{aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher start"
    },
    "setenv": {
      "AWS_GREENGRASS_LAMBDA_CONTAINER_MODE": "{configuration:/containerMode}",
      "AWS_GREENGRASS_LAMBDA_ARN": "arn:aws:lambda:eu-west-1:111934086604:function:boto3_s3_test:4",
      "AWS_GREENGRASS_LAMBDA_FUNCTION_HANDLER": "lambda_function.lambda_handler",
      "AWS_GREENGRASS_LAMBDA_ARTIFACT_PATH": "{artifacts:decompressedPath}/lambda-artifact",
      "AWS_GREENGRASS_LAMBDA_CONTAINER_PARAMS": "{configuration:/containerParams}",
      "AWS_GREENGRASS_LAMBDA_STATUS_TIMEOUT_SECONDS": "{configuration:/statusTimeoutInSeconds}",
      "AWS_GREENGRASS_LAMBDA_ENCODING_TYPE": "{configuration:/inputPayloadEncodingType}",
      "AWS_GREENGRASS_LAMBDA_PARAMS": "{configuration:/lambdaExecutionParameters}",
      "AWS_GREENGRASS_LAMBDA_RUNTIME_PATH": "{aws.greengrass.LambdaRuntimes:artifacts:decompressedPath}/runtime/",
      "AWS_GREENGRASS_LAMBDA_EXEC_ARGS": "[\"python3.7\",\"-u\",\"/runtime/python/lambda_runtime.py\",\"--handler=lambda_function.lambda_handler\"]",
      "AWS_GREENGRASS_LAMBDA_RUNTIME": "python3.7"
    },
    "shutdown": {
      "requiresPrivilege": true,
      "script": "{aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher stop; {aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher clean"
    }
  }
}

Edited by: Dirk-R on May 3, 2021 12:43 PM

Dirk-R
Answered 3 years ago

Dear all,

thanks for the support!
Meanwhile I figured out that our problem was that the AWS IoT service is located in region "Ireland (eu-west-1)" while the S3 bucket is located in region "Frankfurt (eu-central-1)".

It seems that this cross-region access is not supported!

Reading the file from an S3 bucket located in region "Ireland (eu-west-1)" worked fine! (Both buckets have the same permission settings.)
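
(For reference, a sketch of how a boto3 client can be pinned to a bucket's region explicitly; whether such a cross-region call gets through a given proxy/network setup is environment-specific. Bucket and key names are the placeholders from above.)

{noformat}
import boto3

# Assumption for illustration: pinning the client to the Frankfurt bucket's
# region makes boto3 target the eu-central-1 S3 endpoint for the request.
s3 = boto3.client('s3', region_name='eu-central-1')
obj = s3.get_object(Bucket="OurBucketName", Key="OurFile1.csv")
{noformat}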

Kind regards,
Dirk

Dirk-R
Answered 3 years ago
