Questions tagged with Amazon Elastic File System


Mounting a file system in GitHub Actions

I am attempting to shift a workflow into the cloud. To keep costs down I am using GitHub Actions for the Mac-specific part, building macOS install packages with a tool called AutoPkg. AutoPkg caches the application download and the built package between runs; unfortunately this cache is too large for GitHub and can include individual files too big for GitHub Actions. Package building has to happen on a Mac, but the next step, uploading the packages to multiple sites and running some Python to process the built packages, can run on a small Linux EC2 instance. The logical solution therefore seems to be an AWS file system that AutoPkg can use as a cache, mounted on every GitHub Actions run.

I have been tearing my hair out attempting this with either S3 and s3fs or EFS, and can't seem to wrap my head around how all the bits hang together. For testing I have tried the mount natively on my Mac and in amazonlinux and Debian Docker containers. I figure the solution will be using NFS or efs-utils to mount an EFS volume, but I can't get it working. In a Debian container using efs-utils I got close, but it seems I can't get the DNS name to resolve. The amazonlinux Docker container was too basic to get efs-utils to work. I also got the AWS command line tool installed, but it runs into the same DNS resolution problems. I tried connecting the underlying Mac to an AWS VPN in the same VPC as the file system and still had the same DNS problems.

Any help would be appreciated. I've just updated the question with more details of what I have tried.
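One detail that may explain the DNS failures (a sketch, not a confirmed fix): an EFS DNS name of the form `fs-xxxx.efs.<region>.amazonaws.com` only resolves from inside the VPC, via the Amazon-provided resolver. A Mac or container reaching the VPC over a VPN will generally not use that resolver, so a common workaround is to look up the mount target's IP address and mount by IP with plain NFS. The file-system ID, IP address, and mount point below are placeholders:

```
# Find the mount target IP address(es) for the file system:
aws efs describe-mount-targets \
  --file-system-id fs-0123456789abcdef0 \
  --query 'MountTargets[].IpAddress' --output text

# Mount by IP using the NFS options AWS recommends for EFS:
sudo mkdir -p /mnt/efs
sudo mount -t nfs \
  -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  10.0.1.23:/ /mnt/efs
```

Mounting by IP also sidesteps efs-utils entirely, which may help in the minimal containers where it wouldn't install.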
0
answers
0
votes
12
views
asked 25 days ago

Unable to write to EC2 instance running Apache on shared EFS

I have an Auto Scaling group with the following EFS setup in the launch template:

```
sudo yum install -y amazon-efs-utils
sudo mount -t efs fs-0f13ef1378a09e59c:/ /efs
sudo mount -t efs fs-0f13ef1378a09e59c:/html /var/www/html
sudo mount -t efs fs-0f13ef1378a09e59c:/test /home/test
# Reference: https://stackoverflow.com/questions/57260276/using-same-aws-efs-to-share-multiple-directories
```

I have PHP 8.0 and Apache set up following this guide: https://gist.github.com/syad9000/dbc855a11b306cb454b283a83fe479f2. This creates the source AMI that I use to generate two EC2 instances in an Auto Scaling group that uses an EFS to sync the /var/www/html/test folder and the /home/test folder. I have Apache set up to serve port 80 from the /var/www/html/test folder, and I am using an ALB to redirect requests for the qualified domain name to the target group I created.

I can serve files such as /index.html or /index.php fine, and PHP code works in the browser. My problem is that I am trying to create an API that runs a stored shell script based on a GET request. For example, on a GET request to /index.php?build=true, my PHP script tries to execute the /var/www/html/test/build.sh script. I get this error message:

```
<br /> <b>Warning</b>: file_put_contents(/var/www/html/test/.build/error.log): Failed to open stream: Permission denied in <b>/var/www/html/test/_resources/php/functions.php</b> on line <b>11</b><br />
```

When I log in via the console I can run the script with no error, but when I run it via the web browser or curl I get the above error. The script also runs when I log into the console and run it as the apache user with:

```
sudo su -s /bin/bash -c '/var/www/html/build.sh' apache
```

Where does the permissions issue originate: the Apache config, the EFS, or something else like the ALB?
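Since the script runs when invoked as the apache user from a shell, the difference is likely in how httpd itself is confined rather than in plain file permissions. A hedged checklist (the SELinux angle is an assumption, not a confirmed diagnosis; EFS is NFS under the hood, and SELinux confines httpd's access to NFS-mounted content separately from an interactive apache shell):

```
# 1. Confirm the apache user can write to the EFS-backed directory at all:
sudo -u apache touch /var/www/html/test/.build/probe && echo writable

# 2. Check whether SELinux is enforcing:
getenforce

# 3. If it is, this boolean allows httpd to read/write NFS-mounted content:
sudo setsebool -P httpd_use_nfs 1
```

If `getenforce` reports Disabled or Permissive, look instead at the ownership and mode of the `.build` directory on the EFS and at the uid/gid the files were created with.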
1
answers
0
votes
60
views
Bryan C
asked a month ago

Why is my EFS file system policy blocking Fargate from mounting the EFS even though it includes the task execution role ARN?

I'm currently using an EFS mounted on a Fargate task. The task uses the role CustomECSTaskExecutionAgent for task execution and CustomECSTaskAgent for the task itself. With no file system policy in place, Fargate mounts fine and my task is able to read/write to the EFS. However, my company requires a file system policy for each EFS, so I added the following:

```
{
    "Version": "2012-10-17",
    "Id": "efs-statement-8e30733a-a93f-414f-b5b6-284bd5a02c0a",
    "Statement": [
        {
            "Sid": "efs-statement-7c9d03e6-379b-422e-afe6-4d92e7ff4303",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<accountid>:role/CustomECSTaskAgent",
                    "arn:aws:iam::<accountid>:role/CustomECSTaskExecutionAgent",
                    "arn:aws:iam::<accountid>:role/CustomEC2Agent"
                ]
            },
            "Action": "elasticfilesystem:*",
            "Resource": "arn:aws:elasticfilesystem:us-east-1:<accountid>:file-system/fs-id"
        }
    ]
}
```

With this policy Fargate is not able to mount the drive, and I get the following error:

```
ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: b'mount.nfs4: access denied by server while mounting fs-id.efs.us-east-1.amazonaws.com:/' : unsuccessful EFS utils command execution; code: 32
```

If I add the following statement to the policy, Fargate is able to mount the drive, but the task then fails immediately because it is not able to read/write. I cannot keep this statement because it is too permissive, and I'd like to know what Principal I need for:

1. Fargate to mount successfully
2. my task to read/write

```
{
    "Sid": "efs-statement-7c9d03e6-379b-422e-afe6-4d92e7ff4303",
    "Effect": "Allow",
    "Principal": {
        "AWS": "*"
    },
    "Action": "elasticfilesystem:ClientMount",
    "Resource": "arn:aws:elasticfilesystem:us-east-1:<accountid>:file-system/fs-id"
}
```
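One mechanism that would explain both symptoms (offered as a sketch based on how EFS IAM authorization works, not a verified fix): unless the task definition's EFS volume configuration enables IAM authorization, the NFS client mounts anonymously, so the role principals in the file system policy never match and only the `"AWS": "*"` ClientMount statement lets the mount succeed. Enabling transit encryption plus IAM in the volume's `authorizationConfig` makes the mount carry the task role's identity (the task role, CustomECSTaskAgent, not the execution role), which the policy can then grant `elasticfilesystem:ClientMount` and `elasticfilesystem:ClientWrite` to. A task-definition fragment (the volume name is a placeholder, the file-system ID is the question's):

```
"volumes": [
    {
        "name": "efs-volume",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-id",
            "transitEncryption": "ENABLED",
            "authorizationConfig": {
                "iam": "ENABLED"
            }
        }
    }
]
```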
1
answers
0
votes
42
views
Olly
asked a month ago

FS "does not have mount targets created in all availability zones the function will execute in" (but it does)

I'm getting this error:

> Resource handler returned message: "EFS file system arn:aws:elasticfilesystem:us-west-2:999999999999:file-system/fs-0389f6268bc5e61a8 referenced by access point arn:aws:elasticfilesystem:us-west-2:999999999999:access-point/fsap-0ee6de7a6069fda4a does not have mount targets created in all availability zones the function will execute in. Please create EFS mount targets in availability zones where the function has a corresponding subnet provided. (Service: Lambda, Status Code: 400, Request ID: 5c4b694a-ba28-4a9f-8e1a-f1fde134f398)" (RequestToken: 85c51e18-d780-d8df-44d2-54c1194cea9f, HandlerErrorCode: InvalidRequest)

But I don't understand, because I have clearly set up the three AZs. Here's my template in its entirety:

```
AWSTemplateFormatVersion: 2010-09-09
Description: >-
  pouchdb-sam-app
Transform:
  - AWS::Serverless-2016-10-31
Parameters:
  FileSystemName:
    Type: String
    Default: TestFileSystem
Resources:
  MountTargetVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 172.31.0.0/16
      EnableDnsHostnames: True
      EnableDnsSupport: True
  MountTargetSubnetOne:
    Type: AWS::EC2::Subnet
    Properties:
      CidrBlock: 172.31.1.0/24
      VpcId: !Ref MountTargetVPC
      AvailabilityZone: !Sub "${AWS::Region}a"
  MountTargetSubnetTwo:
    Type: AWS::EC2::Subnet
    Properties:
      CidrBlock: 172.31.2.0/24
      VpcId: !Ref MountTargetVPC
      AvailabilityZone: !Sub "${AWS::Region}b"
  MountTargetSubnetThree:
    Type: AWS::EC2::Subnet
    Properties:
      CidrBlock: 172.31.3.0/24
      VpcId: !Ref MountTargetVPC
      AvailabilityZone: !Sub "${AWS::Region}c"
  FileSystemResource:
    Type: 'AWS::EFS::FileSystem'
    Properties:
      PerformanceMode: maxIO
      Encrypted: true
      FileSystemTags:
        - Key: Name
          Value: !Ref FileSystemName
      FileSystemPolicy:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Action:
              - "elasticfilesystem:ClientMount"
            Principal:
              AWS: "*"
  MountTargetResource1:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: !Ref FileSystemResource
      SubnetId: !Ref MountTargetSubnetOne
      SecurityGroups:
        - !GetAtt MountTargetVPC.DefaultSecurityGroup
  MountTargetResource2:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: !Ref FileSystemResource
      SubnetId: !Ref MountTargetSubnetTwo
      SecurityGroups:
        - !GetAtt MountTargetVPC.DefaultSecurityGroup
  MountTargetResource3:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: !Ref FileSystemResource
      SubnetId: !Ref MountTargetSubnetThree
      SecurityGroups:
        - !GetAtt MountTargetVPC.DefaultSecurityGroup
  AccessPointResource:
    Type: 'AWS::EFS::AccessPoint'
    Properties:
      FileSystemId: !Ref FileSystemResource
      PosixUser:
        Uid: "1000"
        Gid: "1000"
      RootDirectory:
        CreationInfo:
          OwnerGid: "1000"
          OwnerUid: "1000"
          Permissions: "0777"
        Path: "/data"
  getAllItemsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/get-all-items.getAllItemsHandler
      Runtime: nodejs16.x
      Architectures:
        - x86_64
      MemorySize: 128
      Timeout: 100
      Events:
        Api:
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: ANY
      VpcConfig:
        SecurityGroupIds:
          - !GetAtt MountTargetVPC.DefaultSecurityGroup
        SubnetIds: [ !Ref MountTargetSubnetOne, !Ref MountTargetSubnetTwo, !Ref MountTargetSubnetThree ]
      FileSystemConfigs:
        - Arn: !GetAtt AccessPointResource.Arn
          LocalMountPath: "/mnt/data"
      Policies:
        - Statement:
            - Sid: AWSLambdaVPCAccessExecutionRole
              Effect: Allow
              Action:
                - logs:CreateLogGroup
                - logs:CreateLogStream
                - logs:PutLogEvents
                - ec2:CreateNetworkInterface
                - ec2:DescribeNetworkInterfaces
                - ec2:DeleteNetworkInterface
              Resource: "*"
            - Sid: AmazonElasticFileSystemClientFullAccess
              Effect: Allow
              Action:
                - elasticfilesystem:ClientMount
                - elasticfilesystem:ClientRootAccess
                - elasticfilesystem:ClientWrite
                - elasticfilesystem:DescribeMountTargets
              Resource: "*"
Outputs:
  WebEndpoint:
    Description: "API Gateway endpoint URL for Prod stage"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
```
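One likely cause, offered as a sketch rather than a confirmed diagnosis: nothing in the template makes the function wait for the mount targets, so CloudFormation can try to create the Lambda while the mount targets are still provisioning, at which point the validation sees no mount target in some AZs. An explicit `DependsOn` on the function forces the ordering:

```
  getAllItemsFunction:
    Type: AWS::Serverless::Function
    DependsOn:
      - MountTargetResource1
      - MountTargetResource2
      - MountTargetResource3
    Properties:
      # ...properties unchanged from the template above...
```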
1
answers
0
votes
22
views
Alex1
asked a month ago

Lambda filesystem 'failed to stat'

While running in the `provided.al2` runtime, my Lambda, which is written in Rust, gives the following error: `failed to stat /tmp/myrepo/.git`

I'm using the `git2` crate to clone a git repository (which is empty) into the `/tmp` directory. The clone process seems to succeed: the files are created on the filesystem from what I can tell, but when the clone is verified, `stat` isn't able to see the files or get information about them. I _suspect_ this is because `/tmp` may actually be a shared directory under the hood, which may be the source of the problem.

While troubleshooting, I also tried attaching an EFS share, which produced the same results: I was able to clone the repository and write the files to disk (in this case an empty repository with a `.git` folder), but cannot call `stat` on them. I'm almost at the point of changing my approach, but wanted to see if my suspicions could be confirmed and/or if there's a way to resolve this without rewriting the application.

**Update** To deploy the Lambda, I zipped the `bootstrap` binary, uploaded it to S3, and deployed via a CloudFormation template.
Below is the CloudFormation template I've used:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  LambdaBuildZipFile:
    Type: String
  VpcSecurityGroupIds:
    Type: String
  VpcSubnetIds:
    Type: String
  LambdaSourceBucketName:
    Type: String
  PostgresLibpqLayerArn:
    Type: String
Resources:
  LambdaFunctionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
        - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
        - arn:aws:iam::aws:policy/AWSCodeCommitFullAccess
        - arn:aws:iam::aws:policy/AmazonRoute53FullAccess
        - arn:aws:iam::aws:policy/AmazonRDSFullAccess
  Lambda:
    Type: AWS::Lambda::Function
    Properties:
      Handler: main
      Runtime: provided.al2
      Role: !GetAtt LambdaFunctionRole.Arn
      Timeout: 300
      VpcConfig:
        SecurityGroupIds: !Split
          - ","
          - !Ref VpcSecurityGroupIds
        SubnetIds: !Split
          - ","
          - !Ref VpcSubnetIds
      Layers:
        - !Ref PostgresLibpqLayerArn
      Code:
        S3Bucket: !Ref LambdaSourceBucketName
        S3Key: !Ref LambdaBuildZipFile
```
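Before changing approach, it may be worth confirming what the OS itself reports for the path, independently of `git2`. A minimal sketch (the helper name and paths are mine, not from the question): run these commands from a shell, or shell them out from the handler and log the output, to compare what the kernel sees with what the crate reports.

```shell
# Print what the filesystem reports for a cloned repo directory.
inspect_repo() {
  local dir="$1"
  ls -la "$dir"       # the entries git2 claims to have written
  stat "$dir/.git"    # the exact call that is failing in the Lambda
  df -h "$dir"        # confirm which filesystem actually backs the path
}

# Example: inspect_repo /tmp/myrepo
```

If `stat` succeeds here but fails through `git2`, the problem is in how the crate resolves the path rather than in the Lambda filesystem.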
1
answers
0
votes
22
views
asked a month ago