Questions tagged with Amazon Elastic File System

Browse through the questions and answers listed below or filter and sort to narrow down your results.

Cannot import the numba Python package inside an AWS Lambda function

I have an AWS Lambda function with the Python 3.7 runtime and the AWSLambda-Python37-SciPy1x layer added. My Python dependencies (such as numba) are installed in an EFS directory that I add to the path so the Lambda function can access them (for version 0.47.0, for example, I installed numba with `pip install --target . numba==0.47.0`). I can import numpy, but trying to import numba gives the following error:

```
Runtime.ImportModuleError: Unable to import module 'lambda_function': Numba could not be imported.
If you are seeing this message and are undertaking Numba development work, you may need to re-run:

python setup.py build_ext --inplace

(Also, please check the development set up guide http://numba.pydata.org/numba-doc/latest/developer/contributing.html.)

If you are not working on Numba development:

Please report the error message and traceback, along with a minimal reproducer
at: https://github.com/numba/numba/issues/new

If more help is needed please feel free to speak to the Numba core developers
directly at: https://gitter.im/numba/numba

Thanks in advance for your help in improving Numba!

The original error was: 'cannot import name '_typeconv' from 'numba.typeconv' (/mnt/access/numba/typeconv/__init__.py)'
--------------------------------------------------------------------------------
If possible please include the following in your error report:

sys.executable: /var/lang/bin/python3.7

Traceback (most recent call last):
```

I tried numba versions 0.45.0, 0.47.0, 0.48.0, 0.49.0, 0.49.1, and 0.55.1, and the error is the same with all of them. I saw this response, but when I delete the numba files from the EFS directory I get a `No module named 'numba'` error, which indicates that there is no other numba version installed.

Below is the full code of the AWS Lambda function:

```python
import sys
sys.path.append("/mnt/access")

import os, json, sys
import numba


def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('All OK')
    }
```

Is there a specific way I should be installing numba?

*PS: I describe this exact problem in [this issue on the numba repository](https://github.com/numba/numba/issues/7975), but unfortunately the numba maintainers are not familiar with AWS Lambda, so I couldn't get help.*
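A failure to import `_typeconv` often indicates that numba's compiled C extensions do not match the interpreter that is loading them, so a first step is to confirm that the environment where `pip install --target` ran matches the Lambda runtime exactly. Below is a minimal diagnostic sketch, assuming the same `/mnt/access` mount path as in the question; the handler simply reports interpreter and platform details so they can be compared against the build machine or container.

```python
import sys
import json
import platform
import sysconfig

sys.path.append("/mnt/access")  # same EFS mount path as in the question


def lambda_handler(event, context):
    # Report interpreter/platform details so they can be compared with the
    # machine (or container) where `pip install --target` was run.
    info = {
        "python_version": sys.version,
        "platform": platform.platform(),
        "abi_tag": sysconfig.get_config_var("SOABI"),
        "path_entries": [p for p in sys.path if p.startswith("/mnt")],
    }
    return {"statusCode": 200, "body": json.dumps(info)}
```

If the ABI tag or platform differs from the environment where the packages were installed, rebuilding the dependencies inside an environment that matches the Lambda Python 3.7 Amazon Linux runtime is the usual remedy.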
1 answer · 0 votes · 175 views · asked 8 months ago

Design questions on ASG, backup/restore, EBS, and EFS

Hi experts,

We are designing the deployment of a BI application in AWS. We have a default policy to repave the EC2 instances every 14 days, which means the whole cluster of instances is rebuilt with its services and brought back to the last known good state. We want a solution with no or minimal downtime. The application has different services provisioned on different EC2 instances: the first server acts as the main node and the rest are additional nodes, each running different services. We install all additional nodes the same way but configure the services later during code deploy.

1. Can we use an Auto Scaling group (ASG)? If yes, how can we preserve the topology? That is, out of 5 instances, if one server is repaved, it should come back up with the same services as the previous one. Is there a way to label instances in the ASG so that a given server is configured for a particular service?
2. Each server has its own EBS volume and stores some data on it. What is the fastest way to copy or attach the EBS volume to the newly repaved server without downtime?
3. For shared data we want to use EFS.
4. For metadata from the embedded Postgres, we need to take a backup periodically and restore it after the repave (a new instance with the same install and services). How can we achieve this without downtime?

We do not want to use a customized AMI, as our AMI creation process is heavyweight and we would have to change the image whenever the install and configuration change. Sorry if this is a lot of questions; any guidance is helpful.
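For items 1 and 2, one common pattern is to tag each instance (for example via a separate launch template or ASG per service role) and have a bootstrap script read its own tags to decide which service to configure and which EBS volume to reattach. The sketch below illustrates the idea with boto3; the tag keys `service-role` and `data-volume-id` are hypothetical names chosen for illustration, and it assumes the instance profile has the relevant EC2 permissions and that instance metadata is reachable.

```python
import urllib.request

import boto3

# Hypothetical tag keys used only for illustration.
SERVICE_TAG = "service-role"
VOLUME_TAG = "data-volume-id"

METADATA_URL = "http://169.254.169.254/latest/meta-data/instance-id"


def bootstrap():
    # Discover which instance we are running on.
    instance_id = urllib.request.urlopen(METADATA_URL, timeout=2).read().decode()

    ec2 = boto3.client("ec2")
    tags = {
        t["Key"]: t["Value"]
        for t in ec2.describe_tags(
            Filters=[{"Name": "resource-id", "Values": [instance_id]}]
        )["Tags"]
    }

    # Reattach the data volume recorded for this service role, if any.
    volume_id = tags.get(VOLUME_TAG)
    if volume_id:
        ec2.attach_volume(
            VolumeId=volume_id, InstanceId=instance_id, Device="/dev/xvdf"
        )
        ec2.get_waiter("volume_in_use").wait(VolumeIds=[volume_id])

    # Hand the role name to whatever configures the service (code deploy step).
    return tags.get(SERVICE_TAG)


if __name__ == "__main__":
    print("configuring as:", bootstrap())
```

One ASG of size 1 per service role keeps the topology stable, since each replacement instance inherits the tags defined on its own group or launch template.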
1 answer · 0 votes · 51 views · asked 8 months ago

EFS performance/cost optimization

We have a relatively small EFS file system of about 20 GB in Bursting Throughput mode. It was set up about 2 months ago and there were no notable performance issues; throughput utilization was always under 2%, even at our maximum load (which lasts only a very short time). Yesterday we suddenly noticed that our site was not responding, yet our servers had very low CPU load. We then saw that EFS throughput utilization had jumped to 100%. Digging deeper, it appears we had been slowly and consistently consuming the original 2.3T BurstCreditBalance over the past few weeks, and it hit zero yesterday.

Problems:

1. The EFS monitoring tab provided little useful information and does not even include BurstCreditBalance; we had to find that metric in CloudWatch ourselves.
2. The throughput utilization graph is misleading: we were slowly using up the credits, but there was no indication of that.
3. We have since switched to Provisioned Throughput mode at 10 MB/s in the meantime, as we are not sure how to derive the correct throughput number for our system. CloudWatch shows 1-second average maximum values of roughly 7.3k for MeteredIOBytes, 770k for DataReadIOBytes, and 780k for DataWriteIOBytes.
4. We are seeing BurstCreditBalance build up much more quickly (with 10 MB/s Provisioned) than we consumed it previously (in Bursting mode). However, when we switched to 2 MB/s Provisioned, our system was visibly throttled even though there was 1T of BurstCreditBalance. Why?

Main questions:

1. Based on the CloudWatch metrics, how do we define a Provisioned rate that is not excessive but also does not limit the system when it needs the throughput?
2. Ideally we would like to use Bursting mode, as it fits our workload better, but with only 20 GB of data we do not seem to accumulate any BurstCreditBalance.
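For the first main question, one way to approach sizing is to pull the file system's metered throughput out of CloudWatch over a representative window and look at the sustained and peak values rather than a single instantaneous number. Below is a sketch that sums `MeteredIOBytes` (namespace `AWS/EFS`) per hour and converts it to MiB/s; `fs-12345678` is a placeholder file system ID and the 14-day window is an arbitrary choice.

```python
from datetime import datetime, timedelta

import boto3

FILE_SYSTEM_ID = "fs-12345678"  # placeholder: replace with the real ID

cloudwatch = boto3.client("cloudwatch")
end = datetime.utcnow()
start = end - timedelta(days=14)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EFS",
    MetricName="MeteredIOBytes",
    Dimensions=[{"Name": "FileSystemId", "Value": FILE_SYSTEM_ID}],
    StartTime=start,
    EndTime=end,
    Period=3600,            # one-hour buckets
    Statistics=["Sum"],
)

# Average throughput for each hour, in MiB/s; the largest values hint at the
# provisioned rate the workload actually needs.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    mib_per_s = point["Sum"] / 3600 / (1024 * 1024)
    print(f"{point['Timestamp']:%Y-%m-%d %H:%M}  {mib_per_s:6.2f} MiB/s")
```

Comparing the hourly averages against the occasional short peaks helps separate the baseline rate (what Provisioned Throughput must cover) from bursts that credits can absorb.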
1 answer · 0 votes · 142 views · asked 8 months ago