Help with connecting Directus to S3 bucket


We are using the Directus CMS to modify our database records and tables (AWS RDS) and to upload files (AWS S3).

We have Directus self-hosted on AWS EC2, running as a Docker container.

We are able to update the database via AWS RDS, but we are having issues with accessing our S3 bucket.

There are access policies enabled on the S3 bucket that allow read/write access from the specific IP addresses required.

We have searched for others experiencing similar issues (example), but those issues don't align with ours, or we've already tried the recommendations.

Directus connects to local storage with no issues, but it doesn't connect to S3, and there don't appear to be any authentication errors or logs that would point us toward the problem.

Does anyone have experience connecting Directus to S3, with ideas on what we could try to troubleshoot or identify the cause, or knowledge of any specific policy, port, IAM, or other configuration that may be required?

  • Have you checked/enabled CloudTrail to be able to see and trace the issue?

  • Is it a network connection problem or a permissions problem?

1 Answer

I have no experience with Directus, but I have recently set up some other applications to use S3, so here is some general advice:

First, make sure that your bucket policy is empty. This sounds like a bad idea from a security perspective, but the default is that unless you specifically grant access to the bucket in some other way, access is denied. So this is fine for starters (again, from a security standpoint), because in general only you can grant access to the bucket from within your account. Creating a complex bucket policy (with IP address restrictions and so on) up front will complicate troubleshooting; you can always add it back in afterwards.
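To see what is currently applied, you can inspect (and, while testing, remove) the bucket policy with the AWS CLI; BUCKETNAME is a placeholder for your bucket name:

# Show the bucket policy currently attached, if any
aws s3api get-bucket-policy --bucket BUCKETNAME

# Remove it temporarily while troubleshooting (you can re-apply it later)
aws s3api delete-bucket-policy --bucket BUCKETNAME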

Second, create an S3 Gateway Endpoint in your VPC. It simplifies the routing and normally saves you data transfer costs. Worst case, it doesn't save you anything, but it won't cost you any more either, so you should just do this.
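A sketch of creating one with the AWS CLI; the region, VPC ID, and route table ID below are placeholders for your own values:

# Create an S3 Gateway Endpoint and attach it to the route table
# used by the subnet your EC2 instance lives in
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Gateway \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.eu-west-1.s3 \
    --route-table-ids rtb-0123456789abcdef0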

Third, create an IAM role that has permission to access the bucket, then attach that role to the instance you're using (as an instance profile). The policy does not have to be complex; again, simple is better for testing. It should be something like:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::BUCKETNAME/*",
                "arn:aws:s3:::BUCNETNAME"
            ]
        }
    ]
}
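One way to wire that up with the AWS CLI is sketched below. The role, profile, and file names are hypothetical placeholders; ec2-trust-policy.json is a standard trust policy allowing ec2.amazonaws.com to assume the role, and directus-s3-policy.json is the policy above saved to a file:

# Create the role with an EC2 trust policy, attach the bucket policy above,
# and associate the resulting instance profile with your instance
aws iam create-role --role-name directus-s3-role \
    --assume-role-policy-document file://ec2-trust-policy.json
aws iam put-role-policy --role-name directus-s3-role \
    --policy-name directus-s3-access \
    --policy-document file://directus-s3-policy.json
aws iam create-instance-profile --instance-profile-name directus-s3-profile
aws iam add-role-to-instance-profile \
    --instance-profile-name directus-s3-profile --role-name directus-s3-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=directus-s3-profile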

Fourth, on the instance try using the AWS CLI tools to access the bucket. Simple commands like aws s3 ls s3://BUCKETNAME/ and aws s3 cp LOCALFILENAME s3://BUCKETNAME/ are good tests. This will prove that the instance has the right access and that the network routing is working.
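For example (BUCKETNAME and LOCALFILENAME are placeholders):

# Run these on the EC2 host with the instance role attached
aws s3 ls s3://BUCKETNAME/                      # read: list the bucket
aws s3 cp LOCALFILENAME s3://BUCKETNAME/        # write: upload a test file
aws s3 rm s3://BUCKETNAME/LOCALFILENAME         # clean up the test object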

Finally, make sure that your container can access the EC2 Instance Metadata service. Instructions for testing this are in the documentation.
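A quick check from inside the container, assuming it has curl available; this uses IMDSv2 and should return the name of the instance role if the metadata service is reachable:

# Get an IMDSv2 session token, then list the role credentials endpoint
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/iam/security-credentials/

Note that with IMDSv2 the default hop limit of 1 can block access from inside containers; if the call times out, you may need to raise the hop limit on the instance with aws ec2 modify-instance-metadata-options --http-put-response-hop-limit 2.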

After that, if things are still not working, you'll need to enable more logging in Directus.
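I can't vouch for the Directus specifics, but based on its documentation the relevant settings look something like the environment variables below (shown as docker run flags; put them in your compose file if that's what you use, and double-check the names against your Directus version):

# Example environment variables for the Directus container (names per the
# Directus docs for S3 storage and logging; verify against your version)
docker run -d \
    -e LOG_LEVEL=debug \
    -e STORAGE_LOCATIONS=s3 \
    -e STORAGE_S3_DRIVER=s3 \
    -e STORAGE_S3_BUCKET=BUCKETNAME \
    -e STORAGE_S3_REGION=eu-west-1 \
    directus/directus
# If Directus does not pick up the instance role credentials, you may also
# need to set STORAGE_S3_KEY and STORAGE_S3_SECRET explicitly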

answered 10 months ago
