Developer Tools

Host code, build, test, and deploy your applications quickly and effectively with AWS Developer Tools. Leverage core tools like software development kits (SDKs), code editors, and continuous integration and delivery (CI/CD) services for DevOps software development. Use machine learning (ML)-guided best practices and abstractions to improve agility, security, velocity, and code quality.

Recent questions


How to install Playwright in Elastic Beanstalk via .ebextensions?

I need to install [Playwright](https://playwright.dev/docs/ci#introduction) in my Elastic Beanstalk environment, so I am using this command in `.ebextensions/01_install_playwright.config`:

```
container_commands:
  install_playwright:
    command: "npx playwright install --with-deps chromium"
```

But it errors out. Here are the logs from cfn-init.log:

```
2022-09-29 05:16:17,188 [INFO] -----------------------Starting build-----------------------
2022-09-29 05:16:17,194 [INFO] Running configSets: Infra-EmbeddedPostBuild
2022-09-29 05:16:17,197 [INFO] Running configSet Infra-EmbeddedPostBuild
2022-09-29 05:16:17,200 [INFO] Running config postbuild_0_test_worker
2022-09-29 05:16:18,246 [ERROR] Command install_playwright (npx playwright install --with-deps chromium) failed
2022-09-29 05:16:18,246 [ERROR] Error encountered during build of postbuild_0_test_worker: Command install_playwright failed
Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 576, in run_config
    CloudFormationCarpenter(config, self._auth_config).build(worklog)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 276, in build
    self._config.commands)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/command_tool.py", line 127, in apply
    raise ToolError(u"Command %s failed" % name)
cfnbootstrap.construction_errors.ToolError: Command install_playwright failed
2022-09-29 05:16:18,247 [ERROR] -----------------------BUILD FAILED!------------------------
2022-09-29 05:16:18,247 [ERROR] Unhandled exception during build: Command install_playwright failed
Traceback (most recent call last):
  File "/opt/aws/bin/cfn-init", line 176, in <module>
    worklog.build(metadata, configSets)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 137, in build
    Contractor(metadata).build(configSets, self)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 564, in build
    self.run_config(config, worklog)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 576, in run_config
    CloudFormationCarpenter(config, self._auth_config).build(worklog)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 276, in build
    self._config.commands)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/command_tool.py", line 127, in apply
    raise ToolError(u"Command %s failed" % name)
cfnbootstrap.construction_errors.ToolError: Command install_playwright failed
```

Am I missing something, or does anyone have a suggestion for how to run the npx command in `.ebextensions`? I also [posted on SO](https://stackoverflow.com/questions/73890683/how-to-install-playwright-dependency-in-elastic-beanstalk-ebextensions) but didn't see any response. Meanwhile, everything works fine in CodePipeline using buildspec.yml and the same command:

```
phases:
  install:
    runtime-versions:
      nodejs: 16  # nodejs: latest
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
      - npm install
      - echo Installing Chromium...
      - npx playwright install --with-deps chromium
```
0 answers · 0 votes · 7 views · asked 4 hours ago

Automatically stop CodeDeploy ECS Blue/Green deployment on unhealthy containers

We are building a CI/CD setup where we remotely trigger a CodePipeline pipeline that fetches its task definition and appspec.yaml from S3 and includes a CodeDeploy ECS Blue/Green step for updating an ECS service. Images are pushed to ECR, also remotely. This setup works, and if the to-be-deployed application is not faulty and is well configured, the deployment succeeds in under 5 minutes.

However, if the application does not pass health checks, or the task definition is broken, CodeDeploy will continuously re-deploy this revision during its "Install" step without end, creating tens of stopped tasks in the ECS service. According to some this should time out after an hour, but we have not tested this.

What we would like to achieve is automatic stopping and rollback of these failing deployments. Ideally CodeDeploy should try only once to deploy the application and, if that fails, immediately cancel the deployment and thus the pipeline run. According to the AWS documentation no options for this exist in CodeDeploy or the appspec.yaml that we upload to S3, so we are unsure how to configure this, if it is possible at all. We have two desired scenarios in mind:

1. After one health check failure, the deployment stops and rolls back;
2. The deployment times out after a period shorter than one hour; ideally < 10 minutes.

We currently have no alarms attached to the CodeDeploy deployment group, but it was my understanding that these alarms only trigger before the installation step to verify that the deployment can proceed, rather than running alongside the deployment.

In short: how would we configure either of those scenarios, or at least prevent CodeDeploy from endlessly deploying replacement task sets?
0 answers · 0 votes · 7 views · asked a day ago
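
One possible direction for the question above is to configure rollback behaviour on the deployment group itself rather than in appspec.yaml. The sketch below uses boto3 with hypothetical application, deployment-group, and alarm names; whether a CloudWatch alarm can catch an unhealthy replacement task set quickly enough depends on the metric chosen, so treat it as a starting point rather than a confirmed fix.

```
# Minimal sketch (boto3): enable automatic rollback on deployment failure and on a
# CloudWatch alarm for an ECS Blue/Green deployment group. "my-app", "my-ecs-dg",
# "my-service-unhealthy-tasks" and the region are hypothetical names.
import boto3

codedeploy = boto3.client("codedeploy", region_name="eu-west-1")

codedeploy.update_deployment_group(
    applicationName="my-app",
    currentDeploymentGroupName="my-ecs-dg",
    autoRollbackConfiguration={
        "enabled": True,
        "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"],
    },
    alarmConfiguration={
        "enabled": True,
        "alarms": [{"name": "my-service-unhealthy-tasks"}],
    },
)

# A deployment that is clearly failing can also be stopped and rolled back explicitly:
# codedeploy.stop_deployment(deploymentId="d-XXXXXXXXX", autoRollbackEnabled=True)
```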

Using the transformers module with a SageMaker Studio project: ModuleNotFoundError: No module named 'transformers'

So as mentioned in my [other recent post](https://repost.aws/questions/QUAL9Vn9abQ6KKCs2ASwwmzg/adjusting-sagemaker-xgboost-project-to-tensorflow-or-even-just-different-folder-name), I'm trying to modify the SageMaker example abalone xgboost template to use TensorFlow. My current problem is that when running the pipeline I get a failure, and in the logs I see:

```
ModuleNotFoundError: No module named 'transformers'
```

NOTE: I am importing 'transformers' in `preprocess.py`, not in `pipeline.py`.

Now I have 'transformers' listed in various places as a dependency, including:

* `setup.py` - `required_packages = ["sagemaker==2.93.0", "sklearn", "transformers", "openpyxl"]`
* `pipelines.egg-info/requires.txt` - `transformers` (auto-generated from setup.py?)

So I'm keen to understand: how can I ensure that additional dependencies are available in the pipeline itself?

Many thanks in advance

---

**ADDITIONAL DETAILS ON HOW I ENCOUNTERED THE ERROR**

From one particular notebook (see [previous post](https://repost.aws/questions/QUAL9Vn9abQ6KKCs2ASwwmzg/adjusting-sagemaker-xgboost-project-to-tensorflow-or-even-just-different-folder-name) for more details) I have successfully constructed the new topic/tensorflow pipeline and run the following steps:

```
pipeline.upsert(role_arn=role)
execution = pipeline.start()
execution.describe()
```

The `describe()` method gives this output:

```
{'PipelineArn': 'arn:aws:sagemaker:eu-west-1:398371982844:pipeline/topicpipeline-example',
 'PipelineExecutionArn': 'arn:aws:sagemaker:eu-west-1:398371982844:pipeline/topicpipeline-example/execution/0aiczulkjoaw',
 'PipelineExecutionDisplayName': 'execution-1664394415255',
 'PipelineExecutionStatus': 'Executing',
 'PipelineExperimentConfig': {'ExperimentName': 'topicpipeline-example', 'TrialName': '0aiczulkjoaw'},
 'CreationTime': datetime.datetime(2022, 9, 28, 19, 46, 55, 147000, tzinfo=tzlocal()),
 'LastModifiedTime': datetime.datetime(2022, 9, 28, 19, 46, 55, 147000, tzinfo=tzlocal()),
 'CreatedBy': {'UserProfileArn': 'arn:aws:sagemaker:eu-west-1:398371982844:user-profile/d-5qgy6ubxlbdq/sjoseph-reg-genome-com-273',
  'UserProfileName': 'sjoseph-reg-genome-com-273',
  'DomainId': 'd-5qgy6ubxlbdq'},
 'LastModifiedBy': {'UserProfileArn': 'arn:aws:sagemaker:eu-west-1:398371982844:user-profile/d-5qgy6ubxlbdq/sjoseph-reg-genome-com-273',
  'UserProfileName': 'sjoseph-reg-genome-com-273',
  'DomainId': 'd-5qgy6ubxlbdq'},
 'ResponseMetadata': {'RequestId': 'f949d6f4-1865-4a01-b7a2-a96c42304071',
  'HTTPStatusCode': 200,
  'HTTPHeaders': {'x-amzn-requestid': 'f949d6f4-1865-4a01-b7a2-a96c42304071',
   'content-type': 'application/x-amz-json-1.1',
   'content-length': '882',
   'date': 'Wed, 28 Sep 2022 19:47:02 GMT'},
  'RetryAttempts': 0}}
```

Waiting for the execution I get:

```
---------------------------------------------------------------------------
WaiterError                               Traceback (most recent call last)
<ipython-input-14-72be0c8b7085> in <module>
----> 1 execution.wait()

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/pipeline.py in wait(self, delay, max_attempts)
    581             waiter_id, model, self.sagemaker_session.sagemaker_client
    582         )
--> 583         waiter.wait(PipelineExecutionArn=self.arn)
    584
    585

/opt/conda/lib/python3.7/site-packages/botocore/waiter.py in wait(self, **kwargs)
     53     # method.
     54     def wait(self, **kwargs):
---> 55         Waiter.wait(self, **kwargs)
     56
     57     wait.__doc__ = WaiterDocstring(

/opt/conda/lib/python3.7/site-packages/botocore/waiter.py in wait(self, **kwargs)
    376                 name=self.name,
    377                 reason=reason,
--> 378                 last_response=response,
    379             )
    380         if num_attempts >= max_attempts:

WaiterError: Waiter PipelineExecutionComplete failed: Waiter encountered a terminal failure state: For expression "PipelineExecutionStatus" we matched expected path: "Failed"
```

Which I assume corresponds to the failure I see in the logs:

![build pipeline error message on preprocessing step](/media/postImages/original/IMMpF6LeI6TgWxp20TnPZbUw)

I did also run `python setup.py build` to ensure my build directory was up to date. Here's the terminal output of that command:

```
sagemaker-user@studio$ python setup.py build
/opt/conda/lib/python3.9/site-packages/setuptools/dist.py:771: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
  warnings.warn(
/opt/conda/lib/python3.9/site-packages/setuptools/config/setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
  warnings.warn(msg, warning_class)
running build
running build_py
copying pipelines/topic/pipeline.py -> build/lib/pipelines/topic
running egg_info
writing pipelines.egg-info/PKG-INFO
writing dependency_links to pipelines.egg-info/dependency_links.txt
writing entry points to pipelines.egg-info/entry_points.txt
writing requirements to pipelines.egg-info/requires.txt
writing top-level names to pipelines.egg-info/top_level.txt
reading manifest file 'pipelines.egg-info/SOURCES.txt'
adding license file 'LICENSE'
writing manifest file 'pipelines.egg-info/SOURCES.txt'
```

It seems like the dependencies are being written to `pipelines.egg-info/requires.txt`, but are these not being picked up by the pipeline?
1 answer · 0 votes · 27 views · asked 2 days ago
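
One likely explanation for the question above is that `setup.py` and `pipelines.egg-info/requires.txt` only describe the environment in which `pipeline.py` is built, not the container that actually executes `preprocess.py` as a ProcessingStep. A minimal sketch of one workaround, assuming the processing job has outbound network access, is to install the extra dependency at the top of `preprocess.py` before importing it:

```
# Sketch of a runtime workaround (not the template's built-in mechanism): install
# the missing package into the running processing container, then import it.
import subprocess
import sys


def _install(package: str) -> None:
    """Install a package into the interpreter running this processing job."""
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])


_install("transformers")

import transformers  # noqa: E402  (imported after the runtime install)
```

An alternative worth checking is a processor that supports `source_dir` together with a `requirements.txt` (for example `FrameworkProcessor` in recent SageMaker SDK versions), or a custom processing image with `transformers` preinstalled.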

Adjusting the SageMaker XGBoost project to TensorFlow (or even just a different folder name)

I have the SageMaker xgboost project template "build, train, deploy" working, but I'd like to modify it to use TensorFlow instead of xgboost. First up I was just trying to change the `abalone` folder to `topic` to reflect the data we are working with. I was experimenting with changing the `topic/pipeline.py` file like so:

```
    image_uri = sagemaker.image_uris.retrieve(
        framework="tensorflow",
        region=region,
        version="1.0-1",
        py_version="py3",
        instance_type=training_instance_type,
    )
```

i.e. just changing the framework name from "xgboost" to "tensorflow", but then when I run the following from a notebook:

```
from pipelines.topic.pipeline import get_pipeline

pipeline = get_pipeline(
    region=region,
    role=role,
    default_bucket=default_bucket,
    model_package_group_name=model_package_group_name,
    pipeline_name=pipeline_name,
)
```

I get the following error:

```
ValueError                                Traceback (most recent call last)
<ipython-input-5-6343f00c3471> in <module>
      7     default_bucket=default_bucket,
      8     model_package_group_name=model_package_group_name,
----> 9     pipeline_name=pipeline_name,
     10 )

~/topic-models-no-monitoring-p-rboparx6tdeg/sagemaker-topic-models-no-monitoring-p-rboparx6tdeg-modelbuild/pipelines/topic/pipeline.py in get_pipeline(region, sagemaker_project_arn, role, default_bucket, model_package_group_name, pipeline_name, base_job_prefix, processing_instance_type, training_instance_type)
    188         version="1.0-1",
    189         py_version="py3",
--> 190         instance_type=training_instance_type,
    191     )
    192     tf_train = Estimator(

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/utilities.py in wrapper(*args, **kwargs)
    197             logger.warning(warning_msg_template, arg_name, func_name, type(value))
    198             kwargs[arg_name] = value.default_value
--> 199         return func(*args, **kwargs)
    200
    201     return wrapper

/opt/conda/lib/python3.7/site-packages/sagemaker/image_uris.py in retrieve(framework, region, version, py_version, instance_type, accelerator_type, image_scope, container_version, distribution, base_framework_version, training_compiler_config, model_id, model_version, tolerate_vulnerable_model, tolerate_deprecated_model, sdk_version, inference_tool, serverless_inference_config)
    152     if inference_tool == "neuron":
    153         _framework = f"{framework}-{inference_tool}"
--> 154     config = _config_for_framework_and_scope(_framework, image_scope, accelerator_type)
    155
    156     original_version = version

/opt/conda/lib/python3.7/site-packages/sagemaker/image_uris.py in _config_for_framework_and_scope(framework, image_scope, accelerator_type)
    277         image_scope = available_scopes[0]
    278
--> 279     _validate_arg(image_scope, available_scopes, "image scope")
    280     return config if "scope" in config else config[image_scope]
    281

/opt/conda/lib/python3.7/site-packages/sagemaker/image_uris.py in _validate_arg(arg, available_options, arg_name)
    443         "Unsupported {arg_name}: {arg}. You may need to upgrade your SDK version "
    444         "(pip install -U sagemaker) for newer {arg_name}s. Supported {arg_name}(s): "
--> 445         "{options}.".format(arg_name=arg_name, arg=arg, options=", ".join(available_options))
    446     )
    447

ValueError: Unsupported image scope: None. You may need to upgrade your SDK version (pip install -U sagemaker) for newer image scopes. Supported image scope(s): eia, inference, training.
```

I was skeptical that the upgrade suggested by the error message would fix this, but gave it a try:

```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
pipelines 0.0.1 requires sagemaker==2.93.0, but you have sagemaker 2.110.0 which is incompatible.
```

So it seems like I can't upgrade sagemaker without changing pipelines, and it's not clear that's the right thing to do; the project template may be designed around those particular earlier libraries. So should the "framework" name be different, e.g. "tf"? Or is there some other setting that needs changing in order to get a TensorFlow pipeline?

However, I find that if I use the existing `abalone/pipeline.py` file I can change the framework to "tensorflow" and there's no problem running that particular step in the notebook. I've searched all the files in the project to try and find any dependency on the `abalone` folder name, and the closest I came was in `codebuild-buildspec.yml`, but that hasn't helped. Has anyone else successfully changed the folder name from `abalone` to something else, or am I stuck with `abalone` if I want to make progress?

Many thanks in advance

p.s. is there a Slack community for SageMaker Studio anywhere?

p.p.s. I have tried changing all instances of the term "Abalone" to "Topic" within the `topic/pipeline.py` file (matching case as appropriate) to no avail

p.p.p.s. I discovered that I can get an error-free run of getting the pipeline from a unit test:

```
import pytest
from pipelines.topic.pipeline import *

region = 'eu-west-1'
role = 'arn:aws:iam::398371982844:role/SageMakerExecutionRole'
default_bucket = 'sagemaker-eu-west-1-398371982844'
model_package_group_name = 'TopicModelPackageGroup-Example'
pipeline_name = 'TopicPipeline-Example'


def test_pipeline():
    pipeline = get_pipeline(
        region=region,
        role=role,
        default_bucket=default_bucket,
        model_package_group_name=model_package_group_name,
        pipeline_name=pipeline_name,
    )
```

and strangely, if I go to a different copy of the notebook, everything runs fine there. So I have two seemingly identical ipynb notebooks, and in one of them, when I switch to trying to get a topic pipeline, I get the above error, while in the other I get no error at all. Very strange.

p.p.p.p.s. I also notice that `conda list` returns very different results depending on whether I run it in the notebook or the terminal, but the `conda list` results are identical for the two notebooks.
1 answer · 0 votes · 20 views · asked 2 days ago
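
For the question above, the "Unsupported image scope: None" error suggests that, unlike xgboost, the "tensorflow" framework has several image scopes (eia, inference, training), so the scope cannot be left to default; the xgboost-style version string "1.0-1" is also not a TensorFlow version. A minimal sketch, with an illustrative version/py_version pair and instance type rather than values taken from the project template:

```
# Sketch: resolve a TensorFlow *training* image explicitly. The version/py_version
# pair must be one the installed sagemaker SDK knows about; adjust if retrieve()
# rejects it. Region and instance type are illustrative.
import sagemaker

region = "eu-west-1"
training_instance_type = "ml.m5.xlarge"

image_uri = sagemaker.image_uris.retrieve(
    framework="tensorflow",
    region=region,
    version="2.8",            # a TensorFlow framework version, not "1.0-1"
    py_version="py39",
    instance_type=training_instance_type,
    image_scope="training",   # tensorflow has multiple scopes, so it must be stated
)
print(image_uri)
```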

Unable to execute HTTP request: Connect to sts.us-east-1.amazonaws.com:443 [sts.us-east-1.amazonaws.com/209.54.177.185] failed: Connect timed out

Sometimes I get the below error from STS during an API call. I am not able to find the root cause of this error.

```
Unable to execute HTTP request: Connect to sts.us-east-1.amazonaws.com:443 [sts.us-east-1.amazonaws.com/209.54.177.185] failed: Connect timed out
```

Stack trace JSON:

```
{
  "message": "Unable to execute HTTP request: Connect to sts.us-east-1.amazonaws.com:443 [sts.us-east-1.amazonaws.com/209.54.177.185] failed: Connect timed out",
  "source": "JavaSDK",
  "stackTrace": "software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:102)",
  "cause": {
    "message": "Connect to sts.us-east-1.amazonaws.com:443 [sts.us-east-1.amazonaws.com/209.54.177.185] failed: Connect timed out",
    "source": "JavaSDK",
    "stackTrace": "org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)",
    "cause": {
      "message": "Connect timed out",
      "source": "JavaSDK",
      "stackTrace": "java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:546)\njava.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597)",
      "cause": null,
      "applicationFailureInfo": {
        "type": "java.net.SocketTimeoutException",
        "nonRetryable": false,
        "details": null
      }
    },
    "applicationFailureInfo": {
      "type": "org.apache.http.conn.ConnectTimeoutException",
      "nonRetryable": false,
      "details": null
    }
  },
  "applicationFailureInfo": {
    "type": "software.amazon.awssdk.core.exception.SdkClientException",
    "nonRetryable": false,
    "details": null
  }
}
```
0 answers · 0 votes · 14 views · asked 3 days ago
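
The question above uses the AWS SDK for Java; purely to illustrate the same mitigation in Python/boto3, the sketch below pins the client to the regional STS endpoint and sets explicit connect/read timeouts and retries, so a transient network hiccup is retried rather than surfacing as a single hard failure.

```
# Sketch (boto3, for illustration only -- the question is about the Java SDK):
# regional STS endpoint plus explicit timeouts and retries.
import boto3
from botocore.config import Config

sts = boto3.client(
    "sts",
    region_name="us-east-1",
    endpoint_url="https://sts.us-east-1.amazonaws.com",  # regional endpoint, as in the error
    config=Config(
        connect_timeout=5,   # seconds allowed for the TCP connect
        read_timeout=10,
        retries={"max_attempts": 5, "mode": "standard"},
    ),
)

print(sts.get_caller_identity()["Arn"])
```

If the caller runs inside a VPC without a route to the public STS endpoint, an STS interface VPC endpoint or a working NAT path is the usual thing to check; client-side timeouts and retries only smooth over transient failures.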

SDK and ChainableTemporaryCredentials

Hi, I already posted my problem on Stack Overflow: https://stackoverflow.com/questions/73702466/chainabletemporarycredentials-getpromise-and-missing-credentials-in-config-if-u

Basically it is the following. When I use

```
const credentials = new ChainableTemporaryCredentials({
  params: {
    RoleArn: 'arn:aws:iam::${this.accountId}:role/${this.targetRoleName}',
    RoleSessionName: this.targetRoleName,
  },
  masterCredentials: new WebIdentityCredentials({
    RoleArn: 'arn:aws:iam::<proxyAccountId>:role/<proxyRoleName>',
    RoleSessionName: this.proxyRoleName,
    WebIdentityToken: token,
  }),
})

await credentials.getPromise()
```

with `token` a token received from GCP, do I still need some kind of AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY in my environment? I don't think so, since the idea of the token is to grant access exactly without such credentials. Right? (In the code block above I had to change some characters because the code template here in the forum had some difficulties with the original 1:1 code.)

At runtime I always get the error message: `Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1`

I don't think I need to use AWS_CONFIG_FILE: my application runs in GCP and just wants to access AWS via STS. My token looks good as far as I can assess:

```
{
  "aud": <here my email address of the service account in GCP>,
  "azp": "21 digit number",
  "email": <same email as under "aud">,
  "email_verified": true,
  "exp": <10 digit number>,
  "iat": <10 digit number>,
  "iss": "https://accounts.google.com",
  "sub": "<same number as under azp>"
}
```

Are my expectations wrong? What is the reason for the error message?

Best regards
Thomas
2 answers · 0 votes · 13 views · asked 3 days ago
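
The question above concerns `ChainableTemporaryCredentials` in the AWS SDK for JavaScript v2; as an illustration of the underlying STS flow only, the sketch below shows the same two hops in Python/boto3. `AssumeRoleWithWebIdentity` is an unsigned STS call, which is why no AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY should be needed for the first hop; the role ARNs and token are placeholders taken from the question.

```
# Sketch of the two-hop web-identity flow with boto3 (illustration only).
import boto3

token = "<GCP service-account OIDC token>"  # placeholder

# Hop 1: exchange the web identity token for temporary credentials.
# AssumeRoleWithWebIdentity is unsigned, so no static AWS keys are required here.
sts = boto3.client("sts", region_name="us-east-1")
proxy = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::<proxyAccountId>:role/<proxyRoleName>",
    RoleSessionName="proxy-session",
    WebIdentityToken=token,
)["Credentials"]

# Hop 2: use the proxy-role credentials to assume the target role in the other account.
proxy_sts = boto3.client(
    "sts",
    region_name="us-east-1",
    aws_access_key_id=proxy["AccessKeyId"],
    aws_secret_access_key=proxy["SecretAccessKey"],
    aws_session_token=proxy["SessionToken"],
)
target = proxy_sts.assume_role(
    RoleArn="arn:aws:iam::<accountId>:role/<targetRoleName>",
    RoleSessionName="target-session",
)["Credentials"]
```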
