AWS Greengrass - Boto3 credentials


Hi there,

I am not sure whether my problem is related to AWS Greengrass or to AWS IoT Core, so forgive me beforehand in any case.

To summarize what is happening: I am using the boto3 framework within an AWS Greengrass native component, since I need to send some files to an S3 bucket under certain conditions... but it only works if I include the keys directly in the code, as shown below:
s3 = boto3.client(service_name='s3', region_name='eu-west-1',
                  aws_access_key_id="XXXXXXXXXXXXXXXXXXXX",
                  aws_secret_access_key="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX")

If not, the component breaks in the middle of the boto3 upload operation. The AWS Greengrass part is working perfectly, since I am able to see data in the MQTT test client, but the other part does not work unless I include that "workaround".

That is something I do not like at all, because it is extremely insecure. So, is there another way to do this? Do I have to link boto3 with the device's AWS Greengrass credentials in some way? Is there any guide I can follow?

Thanks beforehand; any help would be appreciated.

Looking forward to hearing from you.

Kind regards.
Daniel.

asked 3 years ago · 656 views
21 Answers

Hi Daniel,
Have you included a dependency on aws.greengrass.TokenExchangeService in your component recipe? You must do this in order to use credentials. See this documentation about it: https://docs.aws.amazon.com/greengrass/v2/developerguide/interact-with-aws-services.html
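For reference, a minimal sketch of where that dependency goes in a component recipe (the component name and version numbers here are illustrative, not from this thread):

```json
{
  "RecipeFormatVersion": "2020-01-25",
  "ComponentName": "com.example.S3Uploader",
  "ComponentVersion": "1.0.0",
  "ComponentDependencies": {
    "aws.greengrass.TokenExchangeService": {
      "VersionRequirement": "^2.0.0",
      "DependencyType": "HARD"
    }
  }
}
```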

Cheers,
Michael

AWS
EXPERT
answered 3 years ago
  • After adding the below dependency to the recipe file, it worked for me and I was able to put objects in S3.

    "ComponentDependencies": { "aws.greengrass.TokenExchangeService": { "VersionRequirement": "^2.0.0", "DependencyType": "HARD" } }

    Thanks, Sudhakar


Hi Michael,

Thank you very much for replying, and to do so quickly!!

Yes, I read the link you shared some days ago and had already included those lines, but sadly, it did not work. I am attaching both the recipe and the service role permissions so you can see if I am doing something wrong (perhaps I am missing the "iot:DescribeCertificate" line in the second JSON?).

Looking forward to hearing from you.

Kind regards.
Daniel.

answered 3 years ago

Hi Daniel,
Please provide the Greengrass log file and the log file from your component to further debug the issue.

Thanks,
Michael

AWS
EXPERT
answered 3 years ago

Hi Michael,

Thank you very much again for answering so fast!! You may find attached both the Greengrass and component logs, and also the code I am using as an artifact. It has been reduced to only the S3 bucket procedure so it can be easily read.

Looking forward to hearing from you.

Kind regards.
Daniel.

answered 3 years ago

Hi Daniel,
The logs that you attached show that TokenExchangeService did fetch credentials properly with an expiration of 1 hour. Your component would have requested and received those credentials. I do not see any sort of error in your component's log file, so I'm not sure what error you are referring to.
Please also provide the recipe for your component.

Thank you,
Michael

AWS
EXPERT
answered 3 years ago

Hi Michael,

Thank you very much for answering!! I would like to apologize: due to the "try/except" statements, the previously attached code executes "correctly" anyway, so I am updating all files again with a new test that avoids those parts.

Looking forward to hearing from you.

Kind regards.
Daniel.

answered 3 years ago

Hi,
The policy you attached shows permissions for the s3-dani bucket and NOT s3-naturgy, which is the bucket that your code is using; this would explain why you're getting an AccessDenied error.

Please fix the policy or change the S3 bucket that you're uploading to and it should work.

Cheers,
Michael

AWS
EXPERT
answered 3 years ago

Hi Michael,

Do not worry, I fixed the S3 bucket names some days ago; you are seeing that because I just uploaded an old file. The test has been made with both buckets pointing to the same place (s3-naturgy). Take a look at the logs and tell me if you see something I am missing.

Looking forward to hearing from you.

Kind regards.
Daniel.

answered 3 years ago

You have not provided any new logs. From the logs you provided the very clear problem is that you do not have the proper permissions. Ensure that you do not have any AWS credentials on the device from any other source, such as /home/pi/.aws/credentials.
Verify that you are updating the proper IAM role by following the IoT Role Alias that your device is using to the IAM console and verifying the permissions in that IAM role.

AWS
EXPERT
answered 3 years ago

Hi Michael,

Thank you for answering again!

First of all: Yes, the logs are related to my post from "Sep 3, 2021 3:00 AM".

Secondly, regarding what you say:

  1. Ensure that you do not have any AWS credentials on the device from any other source, such as /home/pi/.aws/credentials.
    I found a file at that path, where I can see my credentials:
    pi@raspberrypi:~ $ cat /home/pi/.aws/credentials
    [default]
    aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
    aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    Should I erase it?

  2. Verify that you are updating the proper IAM role by following the IoT Role Alias that your device is using to the IAM console and verifying the permissions in that IAM role.
    That is a bit messy; could you explain it better, or is there any tutorial I might follow? If that is something related to the policy, you can check the one attached to the account.

More clues that could help:
If I execute the following lines in a python3 console, it works:
pi@raspberrypi:~/Pictures $ python3
Python 3.7.3 (default, Jan 22 2021, 20:04:44)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto3
AWS libcrypto resolve: searching process and loaded modules
AWS libcrypto resolve: found static aws-lc HMAC symbols
AWS libcrypto resolve: found static aws-lc libcrypto 1.1.1 EVP_MD symbols
>>> src_dir = '/home/pi/Pictures/Humidity.jpg'
>>> s3 = boto3.client(service_name='s3')
>>> s3.upload_file(src_dir, "s3-naturgy", 'pictures/Humidity.jpg')
>>>

(S3 console: Humidity.jpg, 277.6 KB, uploaded 6 Sep 2021 11:42:43 AM CEST, Standard storage class)

Thanks beforehand, looking forward to hearing from you.

Kind regards.
Daniel.

answered 3 years ago

Hi again Michael,

I wrote that it worked, but it did not, so this reply can be ignored (I did not know how to delete it). Please take the previous one into consideration, not this one.

Looking forward to hearing from you.

Kind regards.
Daniel.

answered 3 years ago

Hi,
Yes, you should remove the AWS credentials file, as the AWS SDK will read the credentials from there instead of getting them from Greengrass. Completely remove /home/pi/.aws/credentials.
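If you want to double-check a device for leftover static credentials before re-testing, a small stdlib-only sketch can help (the helper name here is invented for illustration; it only covers the two sources discussed in this thread):

```python
import configparser
from pathlib import Path

def stray_credential_sources(env, credentials_path):
    """Hypothetical helper: list credential sources on the device that
    would shadow the credentials provided by Greengrass."""
    found = []
    # Static keys in environment variables take precedence over Greengrass.
    if "AWS_ACCESS_KEY_ID" in env and "AWS_SECRET_ACCESS_KEY" in env:
        found.append("environment variables")
    # A shared credentials file (e.g. /home/pi/.aws/credentials) also wins.
    path = Path(credentials_path)
    if path.is_file():
        parser = configparser.ConfigParser()
        parser.read(path)
        for section in parser.sections():
            if parser.has_option(section, "aws_access_key_id"):
                found.append(f"{path} [{section}]")
    return found
```

Running it with `os.environ` and `~/.aws/credentials` on the Pi should return an empty list once the file is gone.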

Cheers,
Michael

AWS
EXPERT
answered 3 years ago

Hi again Michael,

It is still not working :(. I am attaching again all the results and files from the latest test.

Looking forward to hearing from you.

Kind regards.
Daniel.

answered 3 years ago

What is the name of the IAM role which you are modifying to provide S3:PutObject permissions?
I think you must be changing the wrong role since you still have an AccessDenied error.

The default name if you used our automatic provisioning would be GreengrassV2TokenExchangeRole. So if you are not changing the role with that exact name, you are likely changing the wrong role.

AWS
EXPERT
answered 3 years ago

Hi Michael,

MichaelDombrowski-AWS wrote:
What is the name of the IAM role which you are modifying to provide S3:PutObject permissions?
I think you must be changing the wrong role since you still have an AccessDenied error.

Forgive me beforehand if I am wrong, but I do not believe it is something related to permissions; I will try to explain further below:
1- The policy was attached in a previous post, and as you can see, the "S3:PutObject" permission has been included.
2- If I open a python3 console, I can send the picture without writing the credentials by executing the commands below:
pi@raspberrypi:~ $ python3
Python 3.7.3 (default, Jan 22 2021, 20:04:44)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto3
AWS libcrypto resolve: searching process and loaded modules
AWS libcrypto resolve: found static aws-lc HMAC symbols
AWS libcrypto resolve: found static aws-lc libcrypto 1.1.1 EVP_MD symbols
>>> src_dir = '/home/pi/Pictures/Humidity.jpg'
>>> s3 = boto3.client(service_name='s3')
>>> s3.upload_file(src_dir, "s3-naturgy", 'pictures/Humidity.jpg')

*Note: This works ONLY if the .aws folder has been created with the user's credentials (we have talked about deleting this previously).

It is important to point out that if that folder does not exist, the output shown is different:
pi@raspberrypi:~ $ python3
Python 3.7.3 (default, Jan 22 2021, 20:04:44)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto3
AWS libcrypto resolve: searching process and loaded modules
AWS libcrypto resolve: found static aws-lc HMAC symbols
AWS libcrypto resolve: found static aws-lc libcrypto 1.1.1 EVP_MD symbols
>>> src_dir = '/home/pi/Pictures/Humidity.jpg'
>>> s3 = boto3.client(service_name='s3')
>>> s3.upload_file(src_dir, "s3-naturgy", 'pictures/Humidity.jpg')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/pi/.local/lib/python3.7/site-packages/boto3/s3/inject.py", line 131, in upload_file
    extra_args=ExtraArgs, callback=Callback)
  File "/home/pi/.local/lib/python3.7/site-packages/boto3/s3/transfer.py", line 279, in upload_file
    future.result()
  File "/usr/local/lib/python3.7/dist-packages/s3transfer/futures.py", line 106, in result
    return self._coordinator.result()
  File "/usr/local/lib/python3.7/dist-packages/s3transfer/futures.py", line 265, in result
    raise self._exception
  File "/usr/local/lib/python3.7/dist-packages/s3transfer/tasks.py", line 126, in __call__
    return self._execute_main(kwargs)
  File "/usr/local/lib/python3.7/dist-packages/s3transfer/tasks.py", line 150, in _execute_main
    return_value = self._main(**kwargs)
  File "/usr/local/lib/python3.7/dist-packages/s3transfer/upload.py", line 694, in _main
    client.put_object(Bucket=bucket, Key=key, Body=body, **extra_args)
  File "/home/pi/.local/lib/python3.7/site-packages/botocore/client.py", line 386, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/botocore/client.py", line 692, in _make_api_call
    operation_model, request_dict, request_context)
  File "/home/pi/.local/lib/python3.7/site-packages/botocore/client.py", line 711, in _make_request
    return self._endpoint.make_request(operation_model, request_dict)
  File "/home/pi/.local/lib/python3.7/site-packages/botocore/endpoint.py", line 102, in make_request
    return self._send_request(request_dict, operation_model)
  File "/home/pi/.local/lib/python3.7/site-packages/botocore/endpoint.py", line 132, in _send_request
    request = self.create_request(request_dict, operation_model)
  File "/home/pi/.local/lib/python3.7/site-packages/botocore/endpoint.py", line 116, in create_request
    operation_name=operation_model.name)
  File "/home/pi/.local/lib/python3.7/site-packages/botocore/hooks.py", line 356, in emit
    return self._emitter.emit(aliased_event_name, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/botocore/hooks.py", line 228, in emit
    return self._emit(event_name, kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/botocore/hooks.py", line 211, in _emit
    response = handler(**kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/botocore/signers.py", line 90, in handler
    return self.sign(operation_name, request)
  File "/home/pi/.local/lib/python3.7/site-packages/botocore/signers.py", line 162, in sign
    auth.add_auth(request)
  File "/home/pi/.local/lib/python3.7/site-packages/botocore/crt/auth.py", line 32, in add_auth
    raise NoCredentialsError()
botocore.exceptions.NoCredentialsError: Unable to locate credentials

So, if I am able to send it once without writing my credentials, independently of the method (in this case, using the python console), the IAM permissions should not be the problem. But I insist, I am not sure.

Thanks beforehand for answering and for your patience. Looking forward to hearing from you.

Kind regards.
Daniel.

answered 3 years ago

Hi,
AccessDenied can really only be attributed to lacking the proper permissions. I understand that you say the role has S3:PutObject, which is why I asked you for the role name.

Please provide me with the role name which you are editing and which you shared the policy with me.

You should not have any AWS credentials on the device stored in .aws. When running from Greengrass, Greengrass provides credentials to your component through the ECS Credentials Provider. Therefore, it is expected that your application will fail if you do not run it inside of Greengrass (i.e. when you run it manually from the command line). If you include credentials in the .aws directory, your application may use those credentials instead of the ones provided by Greengrass, since the ECS provider is one of the last things that boto3 tries to use.
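To make that ordering concrete, here is a simplified sketch of the provider chain described above (the function name is invented for illustration; the real chain has more steps, but the relative order of these three sources matches boto3's documented behaviour):

```python
def winning_credential_source(has_env_keys, has_shared_file, has_container_uri):
    """Simplified sketch of boto3's credential provider chain: environment
    variables and the shared credentials file are consulted before the
    container provider that Greengrass's TokenExchangeService feeds."""
    if has_env_keys:        # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY set
        return "environment"
    if has_shared_file:     # ~/.aws/credentials exists
        return "shared-credentials-file"
    if has_container_uri:   # AWS_CONTAINER_CREDENTIALS_FULL_URI (Greengrass TES)
        return "container"
    return None             # NoCredentialsError territory
```

This is why a leftover /home/pi/.aws/credentials file silently overrides the role that Greengrass provides.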

Michael

AWS
EXPERT
answered 3 years ago

Hi Michael,

My user is named "naturgy", and it does not have a role itself; it has an embedded policy (known in AWS as an "inline policy") named "naturgy_policy", which only affects my user. Within it, all the lines shared before in the "policy" file can be found.

I do not know if that is what you wanted, let me know otherwise.

Thanks beforehand, looking forward to hearing from you.

Kind regards.
Daniel.

answered 3 years ago

So the reason it isn't working is that you have not given the Greengrass role the required policy. I assume that you used the Greengrass automatic setup, therefore the IAM role is called GreengrassV2TokenExchangeRole. You need to edit the GreengrassV2TokenExchangeRole role to give it the required S3:PutObject permission.
Once you do that it will work.
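A minimal sketch of the policy statement to add to that role (the bucket name is the one used earlier in this thread; scope it down further if needed):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::s3-naturgy/*"
    }
  ]
}
```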

Michael

AWS
EXPERT
answered 3 years ago

Hi Michael,

Bingo!! That was the key; it is working now!!! So, just so I understand it properly and do not end up with "it works" as the final feedback: we have to give permissions to that role, which will be used by boto3 for the credentials exchange, and it needs those permissions to fulfil the action, am I right? If that is true, do I really need to include S3:PutObject in the user policy?

Looking forward to hearing from you and many thanks for your support and for your patience!!!

Kind regards
Daniel.

answered 3 years ago

I'm very glad that worked.

No, your IAM user is not involved here at all, so it does not need S3:PutObject. For Greengrass, only the Greengrass role needs the appropriate permissions.

Cheers,
Michael

AWS
EXPERT
answered 3 years ago

Hi Michael,

Perfect, understood!! Again, thank you very much for your support and your patience.

Kind regards.
Daniel.

answered 3 years ago
