
Questions tagged with Amazon Elastic File System



Cannot import numba python package inside AWS Lambda function

I have an AWS Lambda function with the Python 3.7 runtime and the AWSLambda-Python37-SciPy1x Lambda layer added. My Python dependencies (such as numba) are installed in an EFS directory that I add to the path the Lambda function can access (I installed numba with `pip install --target . numba==0.47.0`, for version 0.47.0, for example). I can import numpy, but trying to import numba gives the following error:

```
Runtime.ImportModuleError: Unable to import module 'lambda_function': Numba could not be imported.
If you are seeing this message and are undertaking Numba development work, you may need to re-run:
python setup.py build_ext --inplace
(Also, please check the development set up guide http://numba.pydata.org/numba-doc/latest/developer/contributing.html.)
If you are not working on Numba development:
Please report the error message and traceback, along with a minimal reproducer at: https://github.com/numba/numba/issues/new
If more help is needed please feel free to speak to the Numba core developers directly at: https://gitter.im/numba/numba
Thanks in advance for your help in improving Numba!
The original error was: 'cannot import name '_typeconv' from 'numba.typeconv' (/mnt/access/numba/typeconv/__init__.py)'
--------------------------------------------------------------------------------
If possible please include the following in your error report:
sys.executable: /var/lang/bin/python3.7
Traceback (most recent call last):
```

I tried numba versions 0.45.0, 0.47.0, 0.48.0, 0.49.0, 0.49.1 and 0.55.1, and the error is the same in all of them. I saw this response, but when I delete the numba files from the EFS directory, I get a No module named 'numba' error, which indicates that there is no other numba version installed. Below is the full code of the AWS Lambda function:

```
import sys
sys.path.append("/mnt/access")

import os, json, sys
import numba

def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('All OK')
    }
```

Is there a specific way I should be installing numba?

*ps: I describe this exact problem in [this issue on the numba repository](https://github.com/numba/numba/issues/7975), but unfortunately the numba maintainers are not familiar with AWS Lambda, so I couldn't get help.*
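One plausible cause of the `_typeconv` failure is that numba's C extensions were installed for a platform other than the Lambda Linux runtime (for example, from a macOS or Windows machine). A hedged sketch of forcing pip to fetch Linux wheels matching the Python 3.7 runtime, reusing the `/mnt/access` path from the question (the pip flags are standard, but this exact invocation is untested here):

```
# Force manylinux wheels built for CPython 3.7 on x86_64, installed into the EFS path.
pip install \
    --target /mnt/access \
    --platform manylinux2014_x86_64 \
    --implementation cp \
    --python-version 3.7 \
    --only-binary=:all: \
    numba==0.55.1
```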
1 answer · 0 votes · 6 views · asked a month ago

Design questions on ASG, backup/restore, EBS and EFS

Hi experts,

We are designing to deploy a BI application in AWS. We have a default policy to repave the EC2 instances every 14 days, which means rebuilding the whole cluster of instances with their services and bringing it back to the last known good state. We want a solution with no or minimal downtime. The application has different services provisioned on different EC2 instances: the first server acts as a main node, and the rest are additional nodes with different services running on them. We install all additional nodes the same way but configure the services later in the code deploy.

1. Can we use an ASG? If yes, how can we distribute the topology? That is, out of 5 instances, if one server repaves, it should come back up with the same services as the previous one. Is there a way to label instances in an ASG saying that this server should be configured for a certain service? (One tag-based approach is sketched below.)
2. Each server should have its own EBS volume and store some data on it. What is the fastest way to copy or attach the EBS volume to the newly repaved server without downtime?
3. For shared data we want to use EFS.
4. For metadata from the embedded Postgres, we need to take a backup periodically and restore it after a repave (create a new instance with the install and the same service). How can we achieve this without downtime?

We do not want to use a customized AMI, as we have a big process for AMI creation and we would often need to change it if we want to add installs and config to it. Sorry if this is a lot to answer; some guidance would be helpful.
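For question 1, one common pattern (an assumption here, not something from the post) is to tag each instance with its service role and have the launch template's user data read that tag at boot; a minimal sketch with hypothetical script paths:

```
#!/usr/bin/env bash
# Hypothetical user-data sketch: read a "service-role" tag and configure accordingly.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/region)
ROLE=$(aws ec2 describe-tags --region "$REGION" \
  --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=service-role" \
  --query 'Tags[0].Value' --output text)
case "$ROLE" in
  main)   /opt/bi/configure-main.sh ;;    # hypothetical config scripts
  worker) /opt/bi/configure-worker.sh ;;
esac
```

One ASG per service role (each of size 1) is another way to get the same pinning without tag logic.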
1 answer · 0 votes · 6 views · asked a month ago

EFS performance/cost optimization

We have a relatively small EFS file system of about 20 GB in bursting mode. It was set up about 2 months ago and there were no real performance issues; utilization was always under 2%, even under our maximum load (which only lasts a very short period of time). Yesterday, we suddenly noticed that our site was not responding, yet our servers had very minimal CPU load. We then saw that the utilization of the EFS had suddenly gone up to 100%. Digging deeper, it seems we had been slowly and consistently consuming the original 2.3T BurstCreditBalance for the past few weeks, and it went to zero yesterday.

Problems:

1. The EFS monitoring tab provided completely useless information and does NOT even include the BurstCreditBalance metric; we had to find it in CloudWatch ourselves.
2. The throughput utilization is misleading: we were actually slowly using up the credits, but there was no indication of this.
3. We have since switched to Provisioned mode at 10 MB/s in the meantime, as we're not really sure how to derive the correct throughput number for our system. CloudWatch is showing 1-second average max values of MeteredIOBytes 7.3k, DataReadIOBytes 770k, DataWriteIOBytes 780k.
4. We're seeing BurstCreditBalance build up much quicker (with 10 MB/s provisioned) than we were consuming it previously (in bursting mode). However, when we switched to 2 MB/s provisioned, our system was visibly throttled even though there was 1T of BurstCreditBalance. Why?

Main questions:

1. How do we define a provisioned rate from the CloudWatch metrics that is not excessive, but also does not limit our system when it needs throughput? (A metric-pulling sketch follows below.)
2. Ideally, we'd like to use bursting mode, as that fits us better, but with just 20 GB we don't seem to accumulate any BurstCreditBalance.
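For sizing, the BurstCreditBalance trend itself is the most direct signal; a hedged CLI sketch for pulling a week of it at one-hour resolution (the file system ID is hypothetical):

```
aws cloudwatch get-metric-statistics \
  --namespace AWS/EFS \
  --metric-name BurstCreditBalance \
  --dimensions Name=FileSystemId,Value=fs-12345678 \
  --start-time "$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 3600 \
  --statistics Minimum \
  --output table
```

The same call with `--metric-name MeteredIOBytes --statistics Sum`, divided by the period, gives an average metered throughput figure to compare against a candidate provisioned rate.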
1 answer · 0 votes · 6 views · asked a month ago

How to access and/or mount Amazon public datasets to EC2

I have an EC2 instance running in us-east-1 that needs to be able to access/manipulate data available in the [KITTI Vision Benchmark public dataset](https://registry.opendata.aws/kitti/). I'd like to make this data available to the instance, but I would also like to be able to reuse it with other instances in the future (more like a mounted S3 approach). I understand that I can view the bucket and recursively download the data to a local folder using the AWS CLI from within the instance:

`aws s3 ls --no-sign-request s3://avg-kitti/`

`aws s3 sync s3://avg-kitti/ .` or `aws s3 cp s3://avg-kitti/ . --recursive`

However, this feels like a brute-force approach, would likely require me to increase my EBS volume size, and would limit my reuse of this data elsewhere (unless I were to snapshot and reuse it). I did find some Stack Overflow solutions that mention some of the open datasets being available as [a snapshot you could copy over and attach as a volume](https://opendata.stackexchange.com/questions/12699/how-can-i-download-aws-public-datasets). But the [KITTI Vision Benchmark public dataset](https://registry.opendata.aws/kitti/) appears to be on S3, so I don't think it would have a snapshot the way the EBS-hosted datasets do.

That being said, is there an easier way to copy the public data over to an existing S3 bucket and then mount my instance to that? I have played around with S3FS and feel like that might be my best bet, but I am worried about 1) the cost of copying/downloading all the data from the public bucket to my own; 2) the best approach for reusing this data on other instances; 3) simply not knowing whether there's a better/cheaper way to make this data available without downloading it, or needing to download it again in the future.
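For the read-only use case, mounting the public bucket directly (without copying it) may be enough; a minimal s3fs-fuse sketch, assuming s3fs is installed and using a hypothetical mount point (`public_bucket=1` makes the requests unsigned, like `--no-sign-request`):

```
sudo mkdir -p /mnt/kitti
# ro keeps the mount read-only; public_bucket=1 skips credentials entirely.
sudo s3fs avg-kitti /mnt/kitti -o public_bucket=1 -o ro -o url=https://s3.amazonaws.com
```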
2 answers · 0 votes · 6 views · asked 3 months ago

EC2 - Could not set DHCPv4 address: Connection timed out (sa-east-1a)

Our c6i.2xlarge 3-year reserved instance, running for its first 5 days, generated the log entry **Could not set DHCPv4 address: Connection timed out** on Jan 28 02:59:51 UTC, followed by **Failed** and **Configured**. From then on, the machine became unresponsive, and AWS finally raised a StatusCheckFailed_Instance at 06:59 UTC. At 09:06 UTC the machine was stopped and restarted through the console. I found these apparently related issues, but I am still clueless:

[CoreOS goes offline on DHCP failure on Amazon VPC](https://github.com/coreos/bugs/issues/2020)

[CoreOS on EC2 losing network connection once a day](https://github.com/coreos/bugs/issues/1551)

The box is running MySQL 5.7.36 and Memcache 1.5.6 on top of Ubuntu 18.04. I would be thankful if someone could help me identify the **root cause** of this issue, and answer:

1. Could this be related to ntp-systemd-netif.service?
2. This instance type has a separate channel for EBS, but with the network down and no customers making requests (no usage logs on the application machine, except the "MySQL connection timeouts"), what would explain a surge in EBS disk reads? CloudWatch graphs linked below.
3. We have an EFS file system attached to this instance that started failing at 04:04 UTC, *probably* related to the network failure. No errors were reported on the EFS sa-east-1 (São Paulo) status page.

```
Jan 28 02:17:01 ip-172-xxx-xxx-xxx CRON[18179]: pam_unix(cron:session): session opened for user root by (uid=0)
Jan 28 02:17:01 ip-172-xxx-xxx-xxx CRON[18180]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jan 28 02:17:01 ip-172-xxx-xxx-xxx CRON[18179]: pam_unix(cron:session): session closed for user root
Jan 28 02:29:11 ip-172-xxx-xxx-xxx systemd-networkd[728]: ens5: Configured
Jan 28 02:29:11 ip-172-xxx-xxx-xxx systemd-timesyncd[623]: Network configuration changed, trying to establish connection.
Jan 28 02:29:12 ip-172-xxx-xxx-xxx systemd-timesyncd[623]: Synchronized to time server 169.254.169.123:123 (169.254.169.123).
Jan 28 02:29:12 ip-172-xxx-xxx-xxx systemd[1]: Started ntp-systemd-netif.service.
Jan 28 02:59:51 ip-172-xxx-xxx-xxx systemd-timesyncd[623]: Network configuration changed, trying to establish connection.
Jan 28 02:59:51 ip-172-xxx-xxx-xxx systemd-networkd[728]: ens5: Could not set DHCPv4 address: Connection timed out
Jan 28 02:59:51 ip-172-xxx-xxx-xxx systemd-networkd[728]: ens5: Failed
Jan 28 02:59:51 ip-172-xxx-xxx-xxx systemd-networkd[728]: ens5: Configured
Jan 28 02:59:51 ip-172-xxx-xxx-xxx systemd-timesyncd[623]: Synchronized to time server 169.254.169.123:123 (169.254.169.123).
Jan 28 02:59:51 ip-172-xxx-xxx-xxx systemd-timesyncd[623]: Network configuration changed, trying to establish connection.
Jan 28 02:59:51 ip-172-xxx-xxx-xxx systemd-timesyncd[623]: Synchronized to time server 169.254.169.123:123 (169.254.169.123).
Jan 28 03:00:01 ip-172-xxx-xxx-xxx systemd[1]: Started ntp-systemd-netif.service.
Jan 28 03:01:21 ip-172-xxx-xxx-xxx systemd-udevd[503]: seq 16407 '/kernel/slab/proc_inode_cache/cgroup/proc_inode_cache(4935:ntp-systemd-netif.service)' is taking a long time
Jan 28 03:01:28 ip-172-xxx-xxx-xxx systemd-udevd[503]: seq 16408 '/kernel/slab/:A-0000040/cgroup/pde_opener(4935:ntp-systemd-netif.service)' is taking a long time
Jan 28 03:01:34 ip-172-xxx-xxx-xxx systemd-udevd[503]: seq 16409 '/kernel/slab/kmalloc-32/cgroup/kmalloc-32(4935:ntp-systemd-netif.service)' is taking a long time
Jan 28 03:01:40 ip-172-xxx-xxx-xxx systemd-udevd[503]: seq 16410 '/kernel/slab/kmalloc-4k/cgroup/kmalloc-4k(4935:ntp-systemd-netif.service)' is taking a long time
Jan 28 03:17:03 ip-172-xxx-xxx-xxx CRON[18284]: pam_unix(cron:session): session opened for user root by (uid=0)
Jan 28 03:17:12 ip-172-xxx-xxx-xxx CRON[18285]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jan 28 03:19:34 ip-172-xxx-xxx-xxx snapd[6419]: autorefresh.go:530: Cannot prepare auto-refresh change: Post https://api.snapcraft.io/v2/snaps/refresh: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 03:19:34 ip-172-xxx-xxx-xxx CRON[18284]: pam_unix(cron:session): session closed for user root
Jan 28 03:28:44 ip-172-xxx-xxx-xxx snapd[6419]: stateengine.go:149: state ensure error: Post https://api.snapcraft.io/v2/snaps/refresh: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 03:36:35 ip-172-xxx-xxx-xxx systemd[1]: Starting Ubuntu Advantage Timer for running repeated jobs...
Jan 28 04:01:18 ip-172-xxx-xxx-xxx systemd[1]: Started ntp-systemd-netif.service.
Jan 28 04:03:09 ip-172-xxx-xxx-xxx systemd-udevd[503]: seq 16496 '/radix_tree_node(4961:ntp-systemd-netif.service)' is taking a long time
Jan 28 04:04:00 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:06:13 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:06:26 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:09:14 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:09:26 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:12:15 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:12:26 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:12:36 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:15:15 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:15:26 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:15:34 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:16:39 ip-172-xxx-xxx-xxx sshd[4657]: pam_unix(sshd:session): session closed for user ubuntu
Jan 28 04:17:30 ip-172-xxx-xxx-xxx systemd-logind[974]: Failed to abandon session scope, ignoring: Connection timed out
Jan 28 04:18:00 ip-172-xxx-xxx-xxx systemd-logind[974]: Removed session 27.
```

[CloudWatch Graphs](https://ibb.co/7tydsyQ)

Thanks!
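When this recurs, the state systemd-networkd holds for the interface (DHCP lease, server, timers) can be inspected directly. These are standard commands, offered as a hedged starting point rather than a diagnosis (the year in the time window is assumed):

```
networkctl status ens5
journalctl -u systemd-networkd --since "2022-01-28 02:00" --until "2022-01-28 05:00"
```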
0 answers · 0 votes · 6 views · asked 4 months ago

failed to invoke EFS utils commands to set up EFS volumes: stderr: Failed to resolve "fs-03ec98cf2f1d81580.efs.us-east-1.amazonaws.com"

Hi, I'm following the APPLICATION MODERNIZATION WITH AWS AND DOCKER workshop steps, and in module 2, step 1, section ***Deploy to Amazon ECS***, I got stuck when deploying the application to AWS ECS. When I execute the `docker compose up` command as instructed, it starts deploying resources: the Docker Compose CLI first concatenates the compose files passed through, generates an opinionated AWS CloudFormation template, and deploys it to create the AWS resources defined in our compose file. After a few minutes, CloudFormation shows DELETE_IN_PROGRESS, and once all the resources have been decommissioned it throws an error message saying:

"*DbService TaskFailedToStart: ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: Failed to resolve 'fs-03ec98cf2f1d81580.efs.us-east-1.amazonaws.com' - check that your file system ID is correct.*"

When I checked the EFS console, the EFS had been created successfully, so I'm not sure why this causes an issue.

**GitHub repo:** https://github.com/spawar1991/docker-compose-ecs-sample

Workshop URL: https://docker.awsworkshop.io/31_docker_ecs_integration/10_migrate_to_ecs.html

Does anyone know how to mitigate this error, and why it happens? Thanks in advance.
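That resolution failure is often a VPC DNS issue rather than a bad file system ID: the `fs-*.efs.*.amazonaws.com` names only resolve inside a VPC with both DNS attributes enabled and a mount target in the task's availability zone. A hedged check with the AWS CLI (the VPC ID is hypothetical; the file system ID is from the error message):

```
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc1234 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc1234 --attribute enableDnsHostnames
aws efs describe-mount-targets --file-system-id fs-03ec98cf2f1d81580
```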
2 answers · 0 votes · 26 views · asked 4 months ago

Deny EFS actions to all but specific user

I'm trying to deny EFS actions to all users except for one specific user (or set of users). When attaching a file system policy to my EFS using a Deny entry with NotPrincipal, I'm not able to access the EFS as I would have expected. Example file system policy:

```
{
    "Sid": "Limit to deployer/CI",
    "Effect": "Deny",
    "NotPrincipal": {
        "AWS": [
            "arn:aws:sts::account_id:assumed-role/role_name/my_email@my_domain.com"
        ]
    },
    "Action": [
        "elasticfilesystem:DescribeMountTargets"
    ],
    "Resource": "arn:aws:elasticfilesystem:eu-west-2:account_id:file-system/efs_id"
}
```

My expectation would be that my role session would have access to the listed action, but no one else would. However, when testing this, even my user is denied access. https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/ suggests that both the `role` ARN and `assumed-role` ARN should be used in this scenario; however, when testing this, it does not work either. Following the logic used within the blog post, I can create the following:

```
{
    "Sid": "Limit to deployer",
    "Effect": "Deny",
    "Principal": {
        "AWS": "*"
    },
    "Action": [
        "elasticfilesystem:DescribeMountTargets"
    ],
    "Resource": "arn:aws:elasticfilesystem:eu-west-2:account_id:file-system/efs_id",
    "Condition": {
        "StringNotLike": {
            "aws:userId": [
                "role_principal_id:my_email@my_domain.com",
                "account_id"
            ]
        }
    }
}
```

This does appear to work as I intend, but I'd like to understand why the first example does not, because it is much more usable and easier to understand.
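For the working Condition-based variant, the two values it needs can be looked up rather than guessed; standard CLI calls, using the placeholder names from the question:

```
# Role principal ID (the role_principal_id prefix used in aws:userId):
aws iam get-role --role-name role_name --query 'Role.RoleId' --output text
# The current session's aws:userId appears in the UserId field here:
aws sts get-caller-identity
```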
3 answers · 0 votes · 9 views · asked 4 months ago

ElasticSearch container failed to start - ECS deployment using docker compose up - /usr/share/elasticsearch/data/nodes/ AccessDeniedException

Hi, I'm trying to start an Elasticsearch container via docker compose (aws-cli, switching to the ecs context), but it fails to start with an AccessDeniedException: it can't write to the /usr/share/elasticsearch/data/nodes/ directory. I have researched the issue on Google, and it's because of the permissions on that folder. From my understanding, I need to fix the permissions on the host directory mapped to /usr/share/elasticsearch/data/nodes/ (I think) by running sudo chown -R 1000:1000 [directory]. However, my container shuts down, so how am I supposed to update the permissions on that directory? This is my docker-compose file; any help appreciated:

```
version: '3.8'
services:
  elasticsearch01:
    user: $USER
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.1
    #image: 645694603269.dkr.ecr.eu-west-2.amazonaws.com/smpn_ecr:latest
    container_name: es02
    restart: unless-stopped
    environment:
      cluster.name: docker-es-cluster
      discovery.type: single-node
      bootstrap.memory_lock: "true"
      # ES_JAVA_OPTS: "-Xms2g -Xmx2g"
      xpack.security.enabled: "false"
      xpack.monitoring.enabled: "false"
      xpack.watcher.enabled: "false"
      node.name: es01
      network.host: 0.0.0.0
      logger.level: DEBUG
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es_data01:/usr/share/elasticsearch/data:rw
    ports:
      - "9200:9200"
      - "9300:9300"
    healthcheck:
      test: "curl -f http://localhost:9200 || exit 1"
    networks:
      - smpn_network

volumes:
  es_data01:
    driver: local

networks:
  smpn_network:
    driver: bridge
```
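Since the ECS integration backs compose volumes with EFS, one hedged workaround (an assumption, not verified against this stack) is to create an EFS access point that enforces uid/gid 1000 (the elasticsearch user) and owns its root directory, so no chown inside the container is needed; a CLI sketch with a hypothetical file system ID:

```
aws efs create-access-point \
  --file-system-id fs-0abc12345678def90 \
  --posix-user Uid=1000,Gid=1000 \
  --root-directory 'Path=/es-data,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=0755}'
```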
0 answers · 0 votes · 15 views · asked 5 months ago

EC2 Task EFS mount issue

Hi, I have a task in a cluster of one EC2 node, and an EFS file system. In the task, I have defined the volume and mounted it to a location on the EC2 host. From the logs I can see that it gets mounted; however, I don't see it with df -T or df -h. I am also able to mount it manually with sudo mount -t efs, which means there are no connection issues between my EC2 and EFS. Thanks. Here are the logs:

```
ecs-volume-plugin.log:level=info time=2021-12-04T20:11:58Z msg="Returning volume information for ecs-core-14-test-efs-test-86ded2bd90b585eb7f00"
ecs-volume-plugin.log:level=info time=2021-12-04T20:41:49Z msg="Returning volume information for ecs-core-14-test-efs-test-86ded2bd90b585eb7f00"
ecs-volume-plugin.log:level=info time=2021-12-04T20:41:49Z msg="Returning volume information for ecs-core-14-test-efs-test-86ded2bd90b585eb7f00"
ecs-volume-plugin.log:level=info time=2021-12-04T20:41:49Z msg="Returning volume information for ecs-core-14-test-efs-test-86ded2bd90b585eb7f00"
ecs-volume-plugin.log:level=info time=2021-12-04T20:41:54Z msg="Creating new volume ecs-core-14-test-efs-test-d4acb6f880d4faba5400"
ecs-volume-plugin.log:level=info time=2021-12-04T20:41:54Z msg="Creating mount target for new volume ecs-core-14-test-efs-test-d4acb6f880d4faba5400"
ecs-volume-plugin.log:level=info time=2021-12-04T20:41:54Z msg="Validating create options for volume ecs-core-14-test-efs-test-d4acb6f880d4faba5400"
ecs-volume-plugin.log:level=info time=2021-12-04T20:41:54Z msg="Mounting volume ecs-core-14-test-efs-test-d4acb6f880d4faba5400 of type efs at path /var/lib/ecs/volumes/ecs-core-14-test-efs-test-d4acb6f880d4faba5400"
ecs-volume-plugin.log:level=info time=2021-12-04T20:41:54Z msg="Volume ecs-core-14-test-efs-test-d4acb6f880d4faba5400 created successfully"
ecs-volume-plugin.log:level=info time=2021-12-04T20:41:54Z msg="Saving state of new volume ecs-core-14-test-efs-test-d4acb6f880d4faba5400"
ecs-volume-plugin.log:level=info time=2021-12-04T20:41:55Z msg="Returning volume information for ecs-core-14-test-efs-test-d4acb6f880d4faba5400"
ecs-volume-plugin.log:level=info time=2021-12-04T20:41:56Z msg="Returning volume information for ecs-core-14-test-efs-test-d4acb6f880d4faba5400"
```
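The plugin log above says the volume is mounted at /var/lib/ecs/volumes/..., not at an arbitrary host path, which may be why a bare df doesn't show what you expect; a hedged way to confirm, using the volume name from the logs:

```
docker volume ls
docker volume inspect ecs-core-14-test-efs-test-d4acb6f880d4faba5400
df -hT /var/lib/ecs/volumes/ecs-core-14-test-efs-test-d4acb6f880d4faba5400
```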
2 answers · 0 votes · 34 views · asked 5 months ago

Best way to expose files from EFS over HTTP(S)?

I have some dynamically-generated files *(more context below)* stored on EFS and need to expose these files over HTTPS. I'm wondering what the best way to do this would be… I've thought of a few ideas; some might be doable and others might not. I'm curious to see what other people think:

1. Set up a CloudFront distribution and register my EFS as an origin. This works fine for S3 but doesn't seem to be possible for EFS :-(
2. Set up some replication mechanism that would upload files to S3 as soon as they are created in EFS (a sketch of this follows below). I haven't checked yet whether EFS can generate an event *(maybe to EventBridge?)* when a file has just been created, but if it can, plugging in another Lambda to copy from EFS to S3 would work… Or maybe a managed service could do that for me? *(I don't really want to update my code to raise an event when a file has been generated; I'd rather have AWS generate that event automatically.)*
3. Set up CloudFront -> API Gateway -> Lambda to serve the file from EFS. Executing a Lambda to serve a file is not optimal from a cost point of view, but those files could be cached by CloudFront *forever*, making this approach OK-ish.

Does one of these approaches sound like what you would do? Do you have another idea or recommendation?

More context:

* The files are created on EFS by a Lambda function: when that Lambda function is called, it downloads an image and generates a thumbnail. That thumbnail is stored, as a *not-too-big* file, on EFS.
* If the Lambda were running my own code, I would change it to write the thumbnail to S3 *(and set up a CloudFront distribution to serve the thumbnails over HTTPS, idea #1)*. But this is not my code and I'm not too fond of modifying it…
* When a thumbnail is generated, it needs to be available over HTTP quickly (a delay of 1-5 seconds is okay-ish; 1-5 minutes is not OK).
* After a thumbnail has been generated, it is never updated. Thumbnails are rarely deleted (and keeping old "deleted" thumbnails for even days is OK).
* Estimates: there will be between one and ten thousand thumbnails on EFS. Total size will be between 1 and 10 GB or so.
* I expect only a few (a dozen, max) new thumbnails to be generated each day, which means a non-serverless, always-running approach will not be optimal from a cost point of view.
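For idea #2, a replication sketch that avoids touching the generating code: a small watcher on any host (or container) that has the EFS mounted, copying each finished file to S3. The paths and bucket are hypothetical; it assumes inotify-tools and the AWS CLI are installed:

```
#!/usr/bin/env bash
# Watch for completed writes under the EFS thumbnail directory and mirror them to S3.
inotifywait -m -r -e close_write --format '%w%f' /mnt/efs/thumbnails |
while read -r path; do
  aws s3 cp "$path" "s3://my-thumbnail-bucket/${path#/mnt/efs/thumbnails/}"
done
```

This keeps the 1-5 second freshness target, at the cost of running something always-on.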
2 answers · 1 vote · 78 views · asked 6 months ago

Old EFS Access Points work but cannot mount a new one?

I am using EFS access points from an EC2 instance. I had initially set up the access points I needed and can mount those with no problem using the efs mount type, e.g.:

```
sudo mount -t efs -o tls,accesspoint=access-point-1-id efs-fs-id:/ mnt1
```

This works, and I can see and update files in the mounted file system. I recently added a new access point under the same file system. However, when I attempt to mount the new access point I get the following:

```
sudo mount -t efs -o tls,accesspoint=access-point-2-id efs-fs-id:/ mnt2
mount.nfs4: access denied by server while mounting 127.0.0.1:/
Exception in thread Thread-1 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/lib64/python2.7/threading.py", line 804, in __bootstrap_inner
  File "/usr/lib64/python2.7/threading.py", line 757, in run
  File "/sbin/mount.efs", line 796, in poll_tunnel_process
<type 'exceptions.TypeError'>: 'NoneType' object is not callable
```

Yet the original access point can still be mounted with no problem. These are both under the same mount target for the subnet, which is available. The network interface shows up in the EC2 console as in-use, and the associated security group has the NFS port accessible; it's certainly allowing access to the first access point. I have tried deleting the access point and the mount target, then recreating both, but I get the same result: the old access point mounts but I cannot mount the new one.

My question is: why is access to the newly added access point denied? Have I forgotten to add the access point to another security list, or is there something in the system I need to restart for the new access point to be noticed?

Edited by: JSDev on Sep 20, 2020 11:29 AM
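One frequent cause of exactly this access-denied pattern (stated here as an assumption, not a diagnosis) is an access point whose configured root directory does not exist on the file system and carries no CreationInfo, in which case mounts are refused while older access points keep working. Comparing the new and old access points side by side is a quick check:

```
aws efs describe-access-points --file-system-id efs-fs-id
```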
5 answers · 0 votes · 53 views · asked 2 years ago

EFS - NFS Client Error while creating a directory or file: Remote I/O Error

Hi All,

I set up a DataSync environment and created a task to transfer data from EFS to EFS. I created 2 different EFS file systems: the source EFS in us-west-1 and the target EFS in us-east-1, and I created 2 locations. When I started the task, it gave an error. I then tried to reproduce the same error coming from the source side, and there was a hint in the DataSync documentation that says to run the following command:

```
sudo mount -t nfs -o nfsvers=4.1 ip-10-1-1-57.us-west-1.compute.internal:/ /efs
```

I ran the command and mounted my NFS server. When I tried to create a directory or file, it gave the error below:

```
[root@ip-10-1-1-191 efs]# sudo mkdir test1
mkdir: cannot create directory 'test1': Input/output error
[root@ip-10-1-1-191 efs]# ls -al
ls: cannot access test2: Remote I/O error
ls: cannot access test1: Remote I/O error
total 4
drwxrwxrwx  4 root root 6144 Aug 16 21:22 .
drwxrwxrwx 19 root root  268 Aug 16 19:31 ..
?????????? ? ?    ?       ?            ? test1
?????????? ? ?    ?       ?            ? test2
```

The EFS, NFS server and NFS client are in the same region and availability zone. I do not have any problem with mounting the EFS and the NFS server.

NFS server commands used on EC2 (10.1.1.57):

```
sudo su
cd /
sudo yum update -y
sudo yum -y install nfs-utils
sudo mkdir /efs
sudo nano /etc/exports
sudo cat /etc/exports
# OUTPUT: /efs *(rw,sync,fsid=0)
sudo service nfs start
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-a25576bb.efs.us-west-1.amazonaws.com:/ /efs
sudo exportfs -r
sudo showmount -e
# OUTPUT: Export list for ip-10-1-1-57.us-west-1.compute.internal: /efs *
df -k
# OUTPUT: fs-a25576bb.efs.us-west-1.amazonaws.com:/ 9007199254739968 0 9007199254739968 0% /efs
```

NFS client commands used on EC2 (10.1.1.191):

```
sudo su
cd /
sudo yum update -y
sudo yum -y install nfs-utils
sudo mkdir /efs
sudo service nfs start
sudo mount -t nfs -o nfsvers=4.1 ip-10-1-1-57.us-west-1.compute.internal:/ /efs
df -k
# OUTPUT: ip-10-1-1-57.us-west-1.compute.internal:/ 9007199254739968 0 9007199254739968 0% /efs
[root@ip-10-1-1-191 efs]# sudo mkdir test
mkdir: cannot create directory 'test': Remote I/O error
[root@ip-10-1-1-191 efs]# ls -al
ls: cannot access test: Remote I/O error
total 4
drwxrwxrwx  3 root root 6144 Aug 16 22:58 .
drwxrwxrwx 19 root root  268 Aug 16 22:55 ..
?????????? ? ?    ?       ?            ? test
```

I need urgent help.

Best regards,
Hakan Korkmaz
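The ????? stat results and Remote I/O errors appear on the client that goes through the re-exported NFS share, and re-exporting an NFS mount through a Linux NFS server is fragile at best. A hedged alternative is to have the client (10.1.1.191) mount the source EFS directly, reusing the options already shown above:

```
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
  fs-a25576bb.efs.us-west-1.amazonaws.com:/ /efs
```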
1 answer · 0 votes · 8 views · asked 2 years ago

AWS EFS - file delete and recreate not detected programmatically for 25 to 30 seconds

I am observing a very large delay in EFS detecting that a file has been re-created, i.e. deleted and then created again. In this simple test example, I have a single file that gets deleted and recreated around 5 seconds later. I have two EC2 instances mounted to the same EFS:

EC2-1: responsible for reading the file.

EC2-2: responsible for deleting and creating the file.

The problem I am seeing is that when EC2-2 deletes the file, EC2-1 correctly updates to say it is no longer present. EC2-2 then recreates the file around 5 seconds later, but EC2-1 does not detect that the file has returned for another 25 to 30 seconds. Now, if I run some sort of query on the file system on EC2-1 just after recreation (like an ls command), it DOES then immediately update to say the file is created. To be clear, I visually see the file get created on EC2-1's file system immediately after creation, just by running an ls; it's reading it programmatically that fails. In my test case I have a Node.js script that literally just calls readFileSync() every second. I have also tested with the same in Python, to conclude this is an EFS issue. If I run the same script on EC2-2, I see the expected results, i.e. the file is missing for a second and then available immediately once recreated. So reading it on the instance that does the delete and create works as expected. It's as if EFS is not detecting the file delete/recreation at all. The OS is Ubuntu Server 18.04 on both EC2 VMs. Tested on new EFS file systems in both the "General Purpose" and "Max I/O" performance modes.
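This behaviour is consistent with NFS client-side attribute and lookup caching on the reading instance (an ls forces revalidation, which is why it "fixes" it). A hedged experiment rather than a recommendation: remount on EC2-1 with caching reduced via the standard Linux NFS options `lookupcache` and `actimeo`, at the cost of extra round trips (the file system DNS name is hypothetical):

```
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,lookupcache=positive,actimeo=1 \
  fs-XXXX.efs.us-east-1.amazonaws.com:/ /efs
```

`lookupcache=positive` stops caching of "file does not exist" lookups; `actimeo=1` caps the attribute cache lifetime at one second.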
3 answers · 0 votes · 14 views · asked 2 years ago

Minimal latency for EFS OPEN(new file), WRITE(1 byte), RENAME and REMOVE?

Thanks in advance for any help with this. I am evaluating EFS for use with an existing proprietary technology stack. Within the system there are many shards that each correspond to a database. When these databases are first opened, there are (currently) several small files created, renamed and removed. The requirement is for each shard to open quickly, ideally in under 50 ms 95% of the time. I have noticed high latency with such operations when testing on EFS and am now wondering how to obtain minimal latency.

I am testing with the m4.10xlarge instance type in us-east-1d (using the EFS DNS name to mount in the same availability zone). I am in a VPC; could the VPC be adding latency?

```
Model        vCPU*  Mem (GiB)  Storage   Dedicated EBS Bandwidth (Mbps)  Network Performance
m4.10xlarge  40     160        EBS-only  4,000                           10 Gigabit
```

Running amzn-ami-hvm-2018.03.0.20181129-x86_64-gp2 (ami-0080e4c5bc078760e). I started with a RHEL 7.6 AMI but switched. I have tested EFS throughput modes Provisioned 1024 MiB/s and Bursting, and performance modes Max I/O and General Purpose (I read that Max I/O can have higher latency, and I have observed this). All with 1.2 TB of files on the filesystem and, in the case of Bursting, plenty of Burst Credit Balance. Testing without encryption at rest. Mount options are the defaults from the "Amazon EC2 mount instructions (from local VPC)", NFS client, so:

```
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-XXXX.efs.us-east-1.amazonaws.com:/ /efs
```

Testing without TLS. What (NFS RTT) latency figures should I expect? So far, after 10 runs of the command below, I am seeing the 1-byte write (to a new file) client NFS RTT at around 10 milliseconds, with the open, rename and remove all between 5 ms and 8 ms. This is on a 1024 MiB/s provisioned-throughput General Purpose EFS.

```
mountstats /efs | egrep -B 3 'RTT: (([1-9][0-9])|([3-9]))' --no-group-separator
WRITE:
    320 ops (12%)   0 retrans (0%)   0 major timeouts
    avg bytes sent per op: 328   avg bytes received per op: 176
    backlog wait: 0.012500   RTT: 10.775000   total execute time: 10.803125 (milliseconds)
OPEN:
    320 ops (12%)   0 retrans (0%)   0 major timeouts
    avg bytes sent per op: 404   avg bytes received per op: 460
    backlog wait: 0.009375   RTT: 7.390625   total execute time: 7.456250 (milliseconds)
REMOVE:
    320 ops (12%)   0 retrans (0%)   0 major timeouts
    avg bytes sent per op: 288   avg bytes received per op: 116
    backlog wait: 0.003125   RTT: 6.390625   total execute time: 6.431250 (milliseconds)
RENAME:
    320 ops (12%)   0 retrans (0%)   0 major timeouts
    avg bytes sent per op: 440   avg bytes received per op: 152
    backlog wait: 0.009375   RTT: 5.750000   total execute time: 5.771875 (milliseconds)
```

This is on a 1024 MiB/s provisioned-throughput Max I/O EFS.
```
mountstats /efs | egrep -B 3 'RTT: (([1-9][0-9])|([6-9]))' --no-group-separator
WRITE:
    320 ops (12%)   0 retrans (0%)   0 major timeouts
    avg bytes sent per op: 328   avg bytes received per op: 176
    backlog wait: 0.012500   RTT: 13.746875   total execute time: 13.775000 (milliseconds)
OPEN:
    320 ops (12%)   0 retrans (0%)   0 major timeouts
    avg bytes sent per op: 404   avg bytes received per op: 460
    backlog wait: 0.009375   RTT: 27.175000   total execute time: 27.196875 (milliseconds)
REMOVE:
    320 ops (12%)   0 retrans (0%)   0 major timeouts
    avg bytes sent per op: 288   avg bytes received per op: 116
    backlog wait: 0.003125   RTT: 19.465625   total execute time: 19.515625 (milliseconds)
RENAME:
    320 ops (12%)   0 retrans (0%)   0 major timeouts
    avg bytes sent per op: 440   avg bytes received per op: 152
    backlog wait: 0.012500   RTT: 19.046875   total execute time: 19.068750 (milliseconds)
```

Testing with this command:

```
export DIR=/efs ; rm $DIR/*.tmp ; ( time bash -c 'seq 1 32 | xargs -I {} bash -c "time ( dd if=/dev/zero of=$DIR/{}.tmp bs=1 count=1 conv=fdatasync ; mv $DIR/{}.tmp $DIR/mv{}.tmp ; rm $DIR/mv{}.tmp )" ' ) 2>&1 | grep real
```

Is this around what to expect from EFS? Can anything be done to lower this latency? I have read https://docs.aws.amazon.com/efs/latest/ug/performance.html:

"The distributed nature of Amazon EFS enables high levels of availability, durability, and scalability. This distributed architecture results in a small latency overhead for each file operation. Due to this per-operation latency, overall throughput generally increases as the average I/O size increases, because the overhead is amortized over a larger amount of data. Amazon EFS supports highly parallelized workloads (for example, using concurrent operations from multiple threads and multiple Amazon EC2 instances), which enables high levels of aggregate throughput and operations per second."
----

This can also be seen with this test program:

```
// Compile with: g++ -Wall -std=c++11 test.cc
#include "stdio.h"
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sstream>
#include <vector>
#include <iostream>
#include <chrono>

int main(int argc, char* argv[]){
    std::vector<std::string> args(argv, argv + argc);
    if (args.size() != 2){
        std::cout << "Usage: " << args[0] << " dir_path" << std::endl;
        return 1;
    }
    for(int i=1;i<32;i++){
        std::ostringstream oss_file;
        std::ostringstream oss_file_rename;
        oss_file << args[1] << "/test_" << i << ".tmp";
        oss_file_rename << args[1] << "/test_" << i << "_rename.tmp";
        FILE *fptr;
        auto start = std::chrono::system_clock::now();
        auto start_for_total = start;
        fptr = fopen(oss_file.str().c_str(), "w");
        auto stop = std::chrono::system_clock::now();
        if( NULL == fptr ){
            printf("Could not open file '%s': %s\n", oss_file.str().c_str(), strerror(errno));
        }
        printf("time in ms for fopen = %3ld ", std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count());
        start = std::chrono::system_clock::now();
        if( write( fileno(fptr), "X", 1 ) <= 0 ){
            printf("Could not write to file '%s': %s\n", oss_file.str().c_str(), strerror(errno));
        }
        stop = std::chrono::system_clock::now();
        printf("write = %3ld ", std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count());
        start = std::chrono::system_clock::now();
        if( 0 != fdatasync( fileno(fptr) )){
            printf("Could not fdatasync file '%s': %s\n", oss_file.str().c_str(), strerror(errno));
        }
        stop = std::chrono::system_clock::now();
        printf("fdatasync = %3ld ", std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count());
        start = std::chrono::system_clock::now();
        if( 0 != fclose(fptr)){
            printf("Could not fclose file '%s': %s\n", oss_file.str().c_str(), strerror(errno));
        }
        stop = std::chrono::system_clock::now();
        printf("fclose = %3ld ", std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count());
        start = std::chrono::system_clock::now();
        if( 0 != rename(oss_file.str().c_str(), oss_file_rename.str().c_str())){
            printf("Could not rename file '%s' to file '%s': %s\n", oss_file.str().c_str(), oss_file_rename.str().c_str(), strerror(errno));
        }
        stop = std::chrono::system_clock::now();
        printf("rename = %3ld ", std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count());
        start = std::chrono::system_clock::now();
        if(unlink(oss_file_rename.str().c_str())!=0){
            printf("Could not unlink file '%s': %s\n", oss_file.str().c_str(), strerror(errno));
        }
        stop = std::chrono::system_clock::now();
        printf("unlink = %3ld total = %3ld\n",
               std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count(),
               std::chrono::duration_cast<std::chrono::milliseconds>(stop - start_for_total).count());
    }
}
```

On the EBS SSD /tmp filesystem:

```
> time ./a.out /tmp
time in ms for fopen = 0 write = 0 fdatasync = 3 fclose = 0 rename = 0 unlink = 0 total = 3
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 3 fclose = 0 rename = 0 unlink = 0 total = 3
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2
time in ms for fopen = 0 write = 0 fdatasync = 3 fclose = 0 rename = 0 unlink = 0 total = 3
time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2

real 0m0.086s
```

On EFS General Purpose with 1024 MiB/s provisioned throughput:

```
> time ./a.out /efs_gp_1024piops
time in ms for fopen = 12 write = 0 fdatasync = 10 fclose = 0 rename = 7 unlink = 5 total = 37
time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 0 rename = 7 unlink = 5 total = 32
time in ms for fopen = 7 write = 0 fdatasync = 11 fclose = 1 rename = 13 unlink = 9 total = 42
time in ms for fopen = 7 write = 0 fdatasync = 11 fclose = 1 rename = 9 unlink = 7 total = 37
time in ms for fopen = 7 write = 0 fdatasync = 12 fclose = 2 rename = 11 unlink = 6 total = 40
time in ms for fopen = 10 write = 0 fdatasync = 13 fclose = 4 rename = 11 unlink = 5 total = 46
time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 0 rename = 20 unlink = 5 total = 44
time in ms for fopen = 8 write = 0 fdatasync = 15 fclose = 6 rename = 14 unlink = 7 total = 52
time in ms for fopen = 11 write = 0 fdatasync = 11 fclose = 3 rename = 15 unlink = 6 total = 48
time in ms for fopen = 8 write = 0 fdatasync = 17 fclose = 1 rename = 11 unlink = 6 total = 44
time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 1 rename = 8 unlink = 5 total = 32
time in ms for fopen = 7 write = 0 fdatasync = 11 fclose = 1 rename = 8 unlink = 6 total = 34
time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 0 rename = 8 unlink = 6 total = 34
time in ms for fopen = 8 write = 0 fdatasync = 10 fclose = 0 rename = 8 unlink = 5 total = 33
time in ms for fopen = 7 write = 0 fdatasync = 11 fclose = 0 rename = 7 unlink = 5 total = 33
time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 1 rename = 7 unlink = 5 total = 32
time in ms for fopen = 8 write = 0 fdatasync = 11 fclose = 0 rename = 7 unlink = 6 total = 34
time in ms for fopen = 7 write = 0 fdatasync = 9 fclose = 0 rename = 7 unlink = 5 total = 31
time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 1 rename = 8 unlink = 5 total = 32
time in ms for fopen = 7 write = 0 fdatasync = 11 fclose = 0 rename = 8 unlink = 6 total = 35
time in ms for fopen = 8 write = 0 fdatasync = 10 fclose = 1 rename = 7 unlink = 5 total = 32
time in ms for fopen = 8 write = 0 fdatasync = 10 fclose = 1 rename = 8 unlink = 5 total = 33
time in ms for fopen = 28 write = 0 fdatasync = 10 fclose = 0 rename = 7 unlink = 5 total = 54
time in ms for fopen = 8 write = 0 fdatasync = 10 fclose = 1 rename = 7 unlink = 6 total = 35
time in ms for fopen = 7 write = 0 fdatasync = 12 fclose = 1 rename = 11 unlink = 6 total = 39
time in ms for fopen = 8 write = 0 fdatasync = 9 fclose = 0 rename = 7 unlink = 6 total = 33
time in ms for fopen = 7 write = 0 fdatasync = 11 fclose = 1 rename = 8 unlink = 5 total = 35
time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 1 rename = 11 unlink = 5 total = 35
time in ms for fopen = 7 write = 0 fdatasync = 11 fclose = 0 rename = 8 unlink = 6 total = 35
time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 1 rename = 8 unlink = 5 total = 33
time in ms for fopen = 8 write = 0 fdatasync = 10 fclose = 1 rename = 7 unlink = 5 total = 33

real 0m1.167s
```

On EFS Max I/O with 1024 MiB/s provisioned throughput:

```
> time ./a.out /efs_maxio_1024piops
time in ms for fopen = 35 write = 0 fdatasync = 13 fclose = 0 rename = 22 unlink = 19 total = 91
time in ms for fopen = 26 write = 0 fdatasync = 12 fclose = 1 rename = 23 unlink = 19 total = 82
time in ms for fopen = 29 write = 0 fdatasync = 12 fclose = 1 rename = 31 unlink = 20 total = 95
time in ms for fopen = 27 write = 0 fdatasync = 13 fclose = 1 rename = 28 unlink = 19 total = 90
time in ms for fopen = 25 write = 0 fdatasync = 14 fclose = 1 rename = 24 unlink = 18 total = 84
time in ms for fopen = 28 write = 0 fdatasync = 11 fclose = 1 rename = 24 unlink = 22 total = 88
time in ms for fopen = 24 write = 0 fdatasync = 13 fclose = 1 rename = 32 unlink = 18 total = 90
time in ms for fopen = 27 write = 0 fdatasync = 11 fclose = 1 rename = 24 unlink = 19 total = 84
time in ms for fopen = 24 write = 0 fdatasync = 14 fclose = 1 rename = 22 unlink = 17 total = 80
time in ms for fopen = 27 write = 0 fdatasync = 12 fclose = 0 rename = 24 unlink = 21 total = 86
time in ms for fopen = 26 write = 0 fdatasync = 13 fclose = 1 rename = 26 unlink = 18 total = 85
time in ms for fopen = 26 write = 0 fdatasync = 13 fclose = 1 rename = 24 unlink = 17 total = 83
time in ms for fopen = 26 write = 0 fdatasync = 13 fclose = 1 rename = 23 unlink = 19 total = 84
time in ms for fopen = 27 write = 0 fdatasync = 12 fclose = 1 rename = 23 unlink = 18 total = 82
time in ms for fopen = 28 write = 0 fdatasync = 16 fclose = 0 rename = 23 unlink = 18 total = 87
time in ms for fopen = 28 write = 0 fdatasync = 13 fclose = 0 rename = 25 unlink = 19 total = 87
time in ms for fopen = 24 write = 0 fdatasync = 10 fclose = 0 rename = 23 unlink = 18 total = 77
time in ms for fopen = 28 write = 0 fdatasync = 15 fclose = 0 rename = 23 unlink = 19 total = 88
time in ms for fopen = 26 write = 0 fdatasync = 13 fclose = 1 rename = 21 unlink = 18 total = 81
time in ms for fopen = 25 write = 0 fdatasync = 13 fclose = 1 rename = 21 unlink = 16 total = 78
time in ms for fopen = 24 write = 0 fdatasync = 14 fclose = 1 rename = 26 unlink = 17 total = 83
time in ms for fopen = 26 write = 0 fdatasync = 14 fclose = 1 rename = 27 unlink = 20 total = 90
time in ms for fopen = 27 write = 0 fdatasync = 11 fclose = 1 rename = 25 unlink = 21 total = 86
time in ms for fopen = 24 write = 0 fdatasync = 11 fclose = 0 rename = 21 unlink = 17 total = 75
time in ms for fopen = 29 write = 0 fdatasync = 16 fclose = 1 rename = 24 unlink = 17 total = 88
time in ms for fopen = 27 write = 0 fdatasync = 13 fclose = 0 rename = 23 unlink = 31 total = 96
time in ms for fopen = 25 write = 0 fdatasync = 14 fclose = 1 rename = 23 unlink = 17 total = 83
time in ms for fopen = 27 write = 0 fdatasync = 13 fclose = 1 rename = 21 unlink = 17 total = 81
time in ms for fopen = 28 write = 0 fdatasync = 14 fclose = 1 rename = 22 unlink = 17 total = 84
time in ms for fopen = 24 write = 0 fdatasync = 13 fclose = 1 rename = 23 unlink = 18 total = 81
time in ms for fopen = 26 write = 0 fdatasync = 12 fclose = 0 rename = 23 unlink = 18 total = 81

real 0m2.649s
```

On EFS General Purpose in Bursting config:

```
> time ./a.out /efs_burst
time in ms for fopen = 7 write = 0 fdatasync = 30 fclose = 0 rename = 25 unlink = 4 total = 68
time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 5 unlink = 4 total = 23
time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 5 unlink = 4 total = 23
time in ms for fopen = 5 write = 0 fdatasync = 8 fclose = 0 rename = 5 unlink = 4 total = 24
time in ms for fopen = 6 write = 0 fdatasync = 8 fclose = 0 rename = 5 unlink = 4 total = 25
time in ms for fopen = 4 write = 0 fdatasync = 7 fclose = 0 rename = 5 unlink = 6 total = 25
time in ms for fopen = 6 write = 0 fdatasync = 7 fclose = 0 rename = 6 unlink = 3 total = 25
time in ms for fopen = 5 write = 0 fdatasync = 8 fclose = 0 rename = 5 unlink = 4 total = 24
time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 6 unlink = 7 total = 28
time in ms for fopen = 4 write = 0 fdatasync = 8 fclose = 0 rename = 5 unlink = 4 total = 23
time in ms for fopen = 5 write = 0 fdatasync = 8 fclose = 0 rename = 6 unlink = 4 total = 25
time in ms for fopen = 5 write = 0 fdatasync = 9 fclose = 0 rename = 7 unlink = 5 total = 28
time in ms for fopen = 6 write = 0 fdatasync = 8 fclose = 0 rename = 6 unlink = 4 total = 26
time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 6 unlink = 4 total = 24
time in ms for fopen = 5 write = 0 fdatasync = 9 fclose = 0 rename = 6 unlink = 4 total = 26
time in ms for fopen = 6 write = 0 fdatasync = 9 fclose = 0 rename = 6 unlink = 4 total = 27
time in ms for fopen = 6 write = 0 fdatasync = 8 fclose = 0 rename = 6 unlink = 4 total = 26
time in ms for fopen = 6 write = 0 fdatasync = 7 fclose = 0 rename = 6 unlink = 3 total = 25
time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 6 unlink = 4 total = 23
time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 6 unlink = 4 total = 24
time in ms for fopen = 7 write = 0 fdatasync = 8 fclose = 0 rename = 6 unlink = 4 total = 26
time in ms for fopen = 5 write = 0 fdatasync = 10 fclose = 0 rename = 6 unlink = 4 total = 28
time in ms for fopen = 5 write = 0 fdatasync = 8 fclose = 0 rename = 6 unlink = 4 total = 25
time in ms for fopen = 5 write = 0 fdatasync = 8 fclose = 0 rename = 6 unlink = 4 total = 25
time in ms for fopen = 5 write = 0 fdatasync = 11 fclose = 0 rename = 5 unlink = 4 total = 27
time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 5 unlink = 3 total = 23
time in ms for fopen = 6 write = 0 fdatasync = 16 fclose = 0 rename = 6 unlink = 4 total = 33
time in ms for fopen = 7 write = 0 fdatasync = 8 fclose = 0 rename = 6 unlink = 4 total = 26
time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 5 unlink = 4 total = 23
time in ms for fopen = 4 write = 0 fdatasync = 7 fclose = 0 rename = 5 unlink = 4 total = 23
time in ms for fopen = 4 write = 0 fdatasync = 7 fclose = 0 rename = 7 unlink = 3 total = 24

real 0m0.845s
```

Thanks again for any input.

Edited by: Indiana on Apr 25, 2019 12:28 PM

Edited by: Indiana on Apr 26, 2019 2:40 AM

Edited by: Indiana on Apr 26, 2019 2:54 AM
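Given the documentation quote above, the one lever that clearly exists is parallelism: issuing the per-shard file operations concurrently amortizes the per-operation latency even if it cannot reduce it. A hedged variant of the test command that runs the 32 create/rename/remove sequences in parallel (`xargs -P` is standard; absolute numbers will differ):

```
export DIR=/efs
seq 1 32 | xargs -P 32 -I {} bash -c \
  'dd if=/dev/zero of=$DIR/{}.tmp bs=1 count=1 conv=fdatasync status=none &&
   mv $DIR/{}.tmp $DIR/mv{}.tmp && rm $DIR/mv{}.tmp'
```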
1 answer · 0 votes · 7 views · asked 3 years ago