Questions tagged with Machine Learning & AI


AWS SageMaker - Extending Pre-built Container, Deploy Endpoint Failed. No such file or directory: 'serve'

I am trying to deploy a SageMaker inference endpoint by extending a pre-built image. However, it fails with "FileNotFoundError: [Errno 2] No such file or directory: 'serve'".

My Dockerfile:

```
ARG REGION=us-west-2

# SageMaker PyTorch image
FROM 763104351884.dkr.ecr.$REGION.amazonaws.com/pytorch-inference:1.12.1-gpu-py38-cu116-ubuntu20.04-ec2

RUN apt-get update

ENV PATH="/opt/ml/code:${PATH}"

# This environment variable is used by the SageMaker PyTorch container to determine our user code directory.
ENV SAGEMAKER_SUBMIT_DIRECTORY /opt/ml/code

# /opt/ml and all subdirectories are utilized by SageMaker; use the /code subdirectory to store your user code.
COPY inference.py /opt/ml/code/inference.py

# Defines inference.py as the script entrypoint
ENV SAGEMAKER_PROGRAM inference.py
```

CloudWatch log from /aws/sagemaker/Endpoints/mytestEndpoint:

```
2022-09-30T04:47:09.178-07:00 Traceback (most recent call last):
  File "/usr/local/bin/dockerd-entrypoint.py", line 20, in <module>
    subprocess.check_call(shlex.split(' '.join(sys.argv[1:])))
  File "/opt/conda/lib/python3.8/subprocess.py", line 359, in check_call
    retcode = call(*popenargs, **kwargs)
  File "/opt/conda/lib/python3.8/subprocess.py", line 340, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/opt/conda/lib/python3.8/subprocess.py", line 858, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/opt/conda/lib/python3.8/subprocess.py", line 1704, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
2022-09-30T04:47:13.409-07:00 FileNotFoundError: [Errno 2] No such file or directory: 'serve'
```
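For context, here is a minimal sketch of how an extended image like the one above is typically deployed as an endpoint with the SageMaker Python SDK. The account ID, image URI, S3 path, role ARN, and instance type below are placeholders, not values from this question. The relevant detail is that SageMaker starts inference containers with the single argument `serve`, which the base image's `dockerd-entrypoint.py` tries to execute, so whatever provides the `serve` command must be present on the image's PATH.

```python
import sagemaker
from sagemaker.model import Model

# Placeholder names: replace the role ARN, image URI, and model data path with your own.
sess = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"

model = Model(
    image_uri="111122223333.dkr.ecr.us-west-2.amazonaws.com/my-extended-pytorch-inference:latest",
    model_data="s3://my-bucket/model/model.tar.gz",
    role=role,
    sagemaker_session=sess,
)

# SageMaker effectively runs `docker run <image> serve` for inference containers,
# so the image's entrypoint must be able to resolve a `serve` command on PATH.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
    endpoint_name="mytestEndpoint",
)
```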
2 answers · 0 votes · 95 views · asked 2 months ago

Using transformers module with SageMaker Studio project: ModuleNotFoundError: No module named 'transformers'

As mentioned in my [other recent post](https://repost.aws/questions/QUAL9Vn9abQ6KKCs2ASwwmzg/adjusting-sagemaker-xgboost-project-to-tensorflow-or-even-just-different-folder-name), I'm trying to modify the SageMaker example abalone xgboost template to use TensorFlow. My current problem is that running the pipeline fails, and in the logs I see:

```
ModuleNotFoundError: No module named 'transformers'
```

NOTE: I am importing 'transformers' in `preprocess.py`, not in `pipeline.py`.

I have 'transformers' listed as a dependency in various places, including:

* `setup.py` - `required_packages = ["sagemaker==2.93.0", "sklearn", "transformers", "openpyxl"]`
* `pipelines.egg-info/requires.txt` - `transformers` (auto-generated from setup.py?)

So I'm keen to understand: how can I ensure that additional dependencies are available in the pipeline itself?

Many thanks in advance

------------

ADDITIONAL DETAILS ON HOW I ENCOUNTERED THE ERROR

From one particular notebook (see [previous post](https://repost.aws/questions/QUAL9Vn9abQ6KKCs2ASwwmzg/adjusting-sagemaker-xgboost-project-to-tensorflow-or-even-just-different-folder-name) for more details) I have successfully constructed the new topic/tensorflow pipeline and run the following steps:

```
pipeline.upsert(role_arn=role)
execution = pipeline.start()
execution.describe()
```

The `describe()` method gives this output:

```
{'PipelineArn': 'arn:aws:sagemaker:eu-west-1:398371982844:pipeline/topicpipeline-example',
 'PipelineExecutionArn': 'arn:aws:sagemaker:eu-west-1:398371982844:pipeline/topicpipeline-example/execution/0aiczulkjoaw',
 'PipelineExecutionDisplayName': 'execution-1664394415255',
 'PipelineExecutionStatus': 'Executing',
 'PipelineExperimentConfig': {'ExperimentName': 'topicpipeline-example',
  'TrialName': '0aiczulkjoaw'},
 'CreationTime': datetime.datetime(2022, 9, 28, 19, 46, 55, 147000, tzinfo=tzlocal()),
 'LastModifiedTime': datetime.datetime(2022, 9, 28, 19, 46, 55, 147000, tzinfo=tzlocal()),
 'CreatedBy': {'UserProfileArn': 'arn:aws:sagemaker:eu-west-1:398371982844:user-profile/d-5qgy6ubxlbdq/sjoseph-reg-genome-com-273',
  'UserProfileName': 'sjoseph-reg-genome-com-273',
  'DomainId': 'd-5qgy6ubxlbdq'},
 'LastModifiedBy': {'UserProfileArn': 'arn:aws:sagemaker:eu-west-1:398371982844:user-profile/d-5qgy6ubxlbdq/sjoseph-reg-genome-com-273',
  'UserProfileName': 'sjoseph-reg-genome-com-273',
  'DomainId': 'd-5qgy6ubxlbdq'},
 'ResponseMetadata': {'RequestId': 'f949d6f4-1865-4a01-b7a2-a96c42304071',
  'HTTPStatusCode': 200,
  'HTTPHeaders': {'x-amzn-requestid': 'f949d6f4-1865-4a01-b7a2-a96c42304071',
   'content-type': 'application/x-amz-json-1.1',
   'content-length': '882',
   'date': 'Wed, 28 Sep 2022 19:47:02 GMT'},
  'RetryAttempts': 0}}
```

Waiting for the execution I get:

```
---------------------------------------------------------------------------
WaiterError                               Traceback (most recent call last)
<ipython-input-14-72be0c8b7085> in <module>
----> 1 execution.wait()

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/pipeline.py in wait(self, delay, max_attempts)
    581             waiter_id, model, self.sagemaker_session.sagemaker_client
    582         )
--> 583         waiter.wait(PipelineExecutionArn=self.arn)
    584
    585

/opt/conda/lib/python3.7/site-packages/botocore/waiter.py in wait(self, **kwargs)
     53     # method.
     54     def wait(self, **kwargs):
---> 55         Waiter.wait(self, **kwargs)
     56
     57     wait.__doc__ = WaiterDocstring(

/opt/conda/lib/python3.7/site-packages/botocore/waiter.py in wait(self, **kwargs)
    376                 name=self.name,
    377                 reason=reason,
--> 378                 last_response=response,
    379             )
    380         if num_attempts >= max_attempts:

WaiterError: Waiter PipelineExecutionComplete failed: Waiter encountered a terminal failure state: For expression "PipelineExecutionStatus" we matched expected path: "Failed"
```

which I assume corresponds to the failure I see in the logs:

![build pipeline error message on preprocessing step](/media/postImages/original/IMMpF6LeI6TgWxp20TnPZbUw)

I did also run `python setup.py build` to ensure my build directory was up to date. Here's the terminal output of that command:

```
sagemaker-user@studio$ python setup.py build
/opt/conda/lib/python3.9/site-packages/setuptools/dist.py:771: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
  warnings.warn(
/opt/conda/lib/python3.9/site-packages/setuptools/config/setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
  warnings.warn(msg, warning_class)
running build
running build_py
copying pipelines/topic/pipeline.py -> build/lib/pipelines/topic
running egg_info
writing pipelines.egg-info/PKG-INFO
writing dependency_links to pipelines.egg-info/dependency_links.txt
writing entry points to pipelines.egg-info/entry_points.txt
writing requirements to pipelines.egg-info/requires.txt
writing top-level names to pipelines.egg-info/top_level.txt
reading manifest file 'pipelines.egg-info/SOURCES.txt'
adding license file 'LICENSE'
writing manifest file 'pipelines.egg-info/SOURCES.txt'
```

It seems like the dependencies are being written to `pipelines.egg-info/requires.txt`, but are these not being picked up by the pipeline?
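As a point of reference (not a confirmed fix for this project): the `setup.py` requirements of the pipelines package are generally installed in the environment where the pipeline is *defined*, not inside the containers that run each step, so one common workaround is to install the extra dependency at the top of the processing script itself. A rough sketch follows; the version pin is an assumption, not taken from the question.

```python
# preprocess.py - hypothetical workaround sketch: install the dependency inside
# the processing container at runtime, before importing it.
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "transformers==4.21.0"])

import transformers  # noqa: E402  (imported after the runtime install above)
```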
1 answer · 0 votes · 77 views · asked 2 months ago

Adjusting SageMaker xgboost project to TensorFlow (or even just a different folder name)

I have the SageMaker xgboost project template "build, train, deploy" working, but I'd like to modify it to use TensorFlow instead of xgboost. First up, I was just trying to change the `abalone` folder to `topic` to reflect the data we are working with.

I was experimenting with changing the `topic/pipeline.py` file like so:

```
    image_uri = sagemaker.image_uris.retrieve(
        framework="tensorflow",
        region=region,
        version="1.0-1",
        py_version="py3",
        instance_type=training_instance_type,
    )
```

i.e. just changing the framework name from "xgboost" to "tensorflow". But then when I run the following from a notebook:

```
from pipelines.topic.pipeline import get_pipeline

pipeline = get_pipeline(
    region=region,
    role=role,
    default_bucket=default_bucket,
    model_package_group_name=model_package_group_name,
    pipeline_name=pipeline_name,
)
```

I get the following error:

```
ValueError                                Traceback (most recent call last)
<ipython-input-5-6343f00c3471> in <module>
      7     default_bucket=default_bucket,
      8     model_package_group_name=model_package_group_name,
----> 9     pipeline_name=pipeline_name,
     10 )

~/topic-models-no-monitoring-p-rboparx6tdeg/sagemaker-topic-models-no-monitoring-p-rboparx6tdeg-modelbuild/pipelines/topic/pipeline.py in get_pipeline(region, sagemaker_project_arn, role, default_bucket, model_package_group_name, pipeline_name, base_job_prefix, processing_instance_type, training_instance_type)
    188         version="1.0-1",
    189         py_version="py3",
--> 190         instance_type=training_instance_type,
    191     )
    192     tf_train = Estimator(

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/utilities.py in wrapper(*args, **kwargs)
    197             logger.warning(warning_msg_template, arg_name, func_name, type(value))
    198             kwargs[arg_name] = value.default_value
--> 199         return func(*args, **kwargs)
    200
    201     return wrapper

/opt/conda/lib/python3.7/site-packages/sagemaker/image_uris.py in retrieve(framework, region, version, py_version, instance_type, accelerator_type, image_scope, container_version, distribution, base_framework_version, training_compiler_config, model_id, model_version, tolerate_vulnerable_model, tolerate_deprecated_model, sdk_version, inference_tool, serverless_inference_config)
    152     if inference_tool == "neuron":
    153         _framework = f"{framework}-{inference_tool}"
--> 154     config = _config_for_framework_and_scope(_framework, image_scope, accelerator_type)
    155
    156     original_version = version

/opt/conda/lib/python3.7/site-packages/sagemaker/image_uris.py in _config_for_framework_and_scope(framework, image_scope, accelerator_type)
    277         image_scope = available_scopes[0]
    278
--> 279     _validate_arg(image_scope, available_scopes, "image scope")
    280     return config if "scope" in config else config[image_scope]
    281

/opt/conda/lib/python3.7/site-packages/sagemaker/image_uris.py in _validate_arg(arg, available_options, arg_name)
    443         "Unsupported {arg_name}: {arg}. You may need to upgrade your SDK version "
    444         "(pip install -U sagemaker) for newer {arg_name}s. Supported {arg_name}(s): "
--> 445         "{options}.".format(arg_name=arg_name, arg=arg, options=", ".join(available_options))
    446     )
    447

ValueError: Unsupported image scope: None. You may need to upgrade your SDK version (pip install -U sagemaker) for newer image scopes. Supported image scope(s): eia, inference, training.
```

I was skeptical that the upgrade suggested by the error message would fix this, but gave it a try:

```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
pipelines 0.0.1 requires sagemaker==2.93.0, but you have sagemaker 2.110.0 which is incompatible.
```

So it seems I can't upgrade sagemaker without changing pipelines, and it's not clear that's the right thing to do - this project template may be designed around those particular earlier libraries.

But should the "framework" name be different, e.g. "tf"? Or is there some other setting that needs changing in order to allow me to get a TensorFlow pipeline?

However, I find that if I use the existing `abalone/pipeline.py` file, I can change the framework to "tensorflow" and there's no problem running that particular step in the notebook.

I've searched all the files in the project to try and find any dependency on the `abalone` folder name, and the closest I came was in `codebuild-buildspec.yml`, but that hasn't helped. Has anyone else successfully changed the folder name from `abalone` to something else, or am I stuck with `abalone` if I want to make progress?

Many thanks in advance

p.s. is there a Slack community for SageMaker Studio anywhere?

p.p.s. I have tried changing all instances of the term "Abalone" to "Topic" within the `topic/pipeline.py` file (matching case as appropriate), to no avail

p.p.p.s. I discovered that I can get an error-free run of getting the pipeline from a unit test:

```
import pytest
from pipelines.topic.pipeline import *

region = 'eu-west-1'
role = 'arn:aws:iam::398371982844:role/SageMakerExecutionRole'
default_bucket = 'sagemaker-eu-west-1-398371982844'
model_package_group_name = 'TopicModelPackageGroup-Example'
pipeline_name = 'TopicPipeline-Example'

def test_pipeline():
    pipeline = get_pipeline(
        region=region,
        role=role,
        default_bucket=default_bucket,
        model_package_group_name=model_package_group_name,
        pipeline_name=pipeline_name,
    )
```

and strangely, if I go to a different copy of the notebook, everything runs fine there. So I have two seemingly identical ipynb notebooks, and in one of them, when I switch to trying to get a topic pipeline, I get the above error, and in the other, I get no error at all. Very strange.

p.p.p.p.s. I also notice that `conda list` returns very different results depending on whether I run it in the notebook or the terminal ... but the `conda list` results are identical for the two notebooks ...
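A side note on the `Unsupported image scope: None` error above: for some frameworks, `sagemaker.image_uris.retrieve` needs an explicit `image_scope` (the error lists `eia, inference, training` as the supported scopes), and the TensorFlow containers use their own version strings rather than the xgboost-style `1.0-1`. Below is a rough sketch under those assumptions; the version, Python version, and instance type are illustrative, not taken from the project.

```python
import sagemaker

# Illustrative values only: check the TensorFlow container versions available
# for your region and SDK release before relying on these.
image_uri = sagemaker.image_uris.retrieve(
    framework="tensorflow",
    region="eu-west-1",
    version="2.8",
    py_version="py39",
    instance_type="ml.m5.xlarge",
    image_scope="training",
)
print(image_uri)
```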
1 answer · 0 votes · 43 views · asked 2 months ago

SageMaker MultiDataModel deployment error during inference. ValueError: Exactly one .pth or .pt file is required for PyTorch models: []

Hello,

I've been trying to deploy multiple PyTorch models on one endpoint on SageMaker, from a SageMaker notebook. First I tested deployment of single models on single endpoints, to check that everything works smoothly, and it did.

I would create a PyTorchModel first:

```
import sagemaker
from sagemaker.pytorch import PyTorchModel
from sagemaker import get_execution_role
from sagemaker.multidatamodel import MultiDataModel
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer
import boto3

role = get_execution_role()
sagemaker_session = sagemaker.Session()

pytorch_model = PyTorchModel(
    entry_point='inference.py',
    source_dir='code',
    role=role,
    model_data='s3://***/model/model.tar.gz',
    framework_version='1.11.0',
    py_version='py38',
    name='***-model',
    sagemaker_session=sagemaker_session
)
```

MultiDataModel inherits properties from the Model class, so I used the same PyTorch model that I used for single-model deployment. Then I would define the MultiDataModel the following way:

```
models = MultiDataModel(
    name='***-multi-model',
    model_data_prefix='s3://***-sagemaker/model/',
    model=pytorch_model,
    sagemaker_session=sagemaker_session
)
```

All it should need is the prefix of the S3 bucket where the model artifacts are saved as tar.gz files (the same files used for single-model deployment), the previously defined PyTorch model, a name, and a sagemaker_session.

To deploy it:

```
models.deploy(
    initial_instance_count=1,
    instance_type='ml.m4.xlarge',
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer(),
    endpoint_name='***-multi-model-deployment',
)
```

The deployment goes well, as there are no failures and the endpoint is InService by the end of this step. However, an error occurs when I try to run inference on one of the models:

```
import json

body = {"url": "https://***image.jpg"}  # url to an image online
payload = json.dumps(body)

client = boto3.client('sagemaker-runtime')
response = client.invoke_endpoint(
    EndpointName="***-multi-model-deployment",
    ContentType="application/json",
    TargetModel="/model.tar.gz",
    Body=payload)
```

This prompts an error message:

```
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (500) from model with message "{
  "code": 500,
  "type": "InternalServerException",
  "message": "Failed to start workers for model ec1cd509c40ca81ffc3fb09deb4599e2 version: 1.0"
}
". See https://***.console.aws.amazon.com/cloudwatch/home?region=***#logEventViewer:group=/aws/sagemaker/Endpoints/***-multi-model-deployment in account ***** for more information.
```

The CloudWatch logs show this error in particular:

```
22-09-26T15:51:40,494 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -   File "/opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py", line 210, in <module>
2022-09-26T15:51:40,494 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -     worker.run_server()
2022-09-26T15:51:40,494 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -   File "/opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py", line 181, in run_server
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -     self.handle_connection(cl_socket)
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -   File "/opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py", line 139, in handle_connection
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -     service, result, code = self.load_model(msg)
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -   File "/opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py", line 104, in load_model
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -     service = model_loader.load(
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -   File "/opt/conda/lib/python3.8/site-packages/ts/model_loader.py", line 151, in load
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -     initialize_fn(service.context)
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -   File "/opt/conda/lib/python3.8/site-packages/sagemaker_pytorch_serving_container/handler_service.py", line 51, in initialize
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -     super().initialize(context)
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -   File "/opt/conda/lib/python3.8/site-packages/sagemaker_inference/default_handler_service.py", line 66, in initialize
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -     self._service.validate_and_initialize(model_dir=model_dir)
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -   File "/opt/conda/lib/python3.8/site-packages/sagemaker_inference/transformer.py", line 162, in validate_and_initialize
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -     self._model = self._model_fn(model_dir)
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -   File "/opt/conda/lib/python3.8/site-packages/sagemaker_pytorch_serving_container/default_pytorch_inference_handler.py", line 73, in default_model_fn
2022-09-26T15:51:40,495 [INFO ] W-9000-model_1.0-stdout MODEL_LOG -     raise ValueError(
2022-09-26T15:51:40,496 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - ValueError: Exactly one .pth or .pt file is required for PyTorch models: []
```

It seems to be having problems loading the model, saying exactly one .pth or .pt file is required, even though in the invocation I point to the exact model artifact present at that S3 prefix. I'm having a hard time trying to fix this issue, so it would be very helpful if anyone had some suggestions!

Instead of giving the MultiDataModel a model, I also tried providing it an ECR Docker image with the same inference code, but I would get the same error during invocation of the endpoint.
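One detail worth noting, as an assumption about the cause rather than a verified answer: the log above shows the *default* PyTorch `model_fn` being called, which expects exactly one `.pth`/`.pt` file in the extracted archive. With multi-model endpoints, the custom `inference.py` passed via `entry_point`/`source_dir` is not necessarily repacked into each `model.tar.gz` under `model_data_prefix`, so a common layout is to bundle the handler code inside every archive. A rough packaging sketch (file names are placeholders):

```python
# Hypothetical packaging sketch: each model.tar.gz under the model_data_prefix
# contains the weights plus the custom handler code, e.g.
#
#   model.tar.gz
#   |-- model.pth
#   `-- code/
#       |-- inference.py      # defines model_fn / input_fn / predict_fn / output_fn
#       `-- requirements.txt  # optional extra dependencies
#
import tarfile

with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("model.pth", arcname="model.pth")
    tar.add("code/inference.py", arcname="code/inference.py")
    tar.add("code/requirements.txt", arcname="code/requirements.txt")
```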
1 answer · 0 votes · 107 views · asked 2 months ago