Recent questions

0 answers · 0 votes · 6 views · asked 5 hours ago

Unable to deploy Greengrass V2 available component

**Environment:** Greengrass V2 Nucleus (2.9.1) running as a Docker container. We developed one `aws.greengrass.generic` component and have successfully published 4 versions (1.0.0, 1.0.1, 1.0.2 and 1.0.3). All four versions are published and displayed in the Greengrass V2 console, and the artifacts are correctly available in the S3 bucket.

```
[2022-11-26 17:32:54] INFO - Getting project configuration from gdk-config.json
[2022-11-26 17:32:54] INFO - Found component recipe file 'recipe.yaml' in the project directory.
[2022-11-26 17:32:54] INFO - Found credentials in shared credentials file: ~/.aws/credentials
[2022-11-26 17:32:55] INFO - Using '1.0.3' as the next version of the component '<my-component>' to create.
[2022-11-26 17:32:55] INFO - Publishing the component '<my-component>' with the given project configuration.
[2022-11-26 17:32:55] INFO - Uploading the component built artifacts to s3 bucket.
[2022-11-26 17:32:55] INFO - Uploading component artifacts to S3 bucket: <my-bucket>. If this is your first time using this bucket, add the 's3:GetObject' permission to each core device's token exchange role to allow it to download the component artifacts. For more information, see https://docs.aws.amazon.com/greengrass/v2/developerguide/device-service-role.html.
[2022-11-26 17:32:56] INFO - Successfully created the artifacts bucket '<my-bucket>' in region 'us-east-1'
[2022-11-26 17:32:58] INFO - Updating the component recipe <my-component>-1.0.3.
[2022-11-26 17:32:58] INFO - Creating a new greengrass <my-component>-1.0.3
[2022-11-26 17:32:58] INFO - Created private version '1.0.3' of the component in the account.'<my-component>'.
```

**However**, we apparently can only deploy version `1.0.0` successfully. All other versions fail.

```
2022-11-26T20:19:01.427Z [INFO] (pool-2-thread-47) com.aws.greengrass.componentmanager.ComponentManager: prepare-package-start. {packageIdentifier=<my-component>-v1.0.0}
```

When we try to deploy *any* newer version (e.g. `1.0.3`), we always get this error:

```
2022-11-26T20:36:10.987Z [ERROR] (pool-2-thread-51) com.aws.greengrass.deployment.DeploymentService: Error occurred while processing deployment. {deploymentId=3a4cb705-db16-4b5b-94b4-30e53f0edc9b, serviceName=DeploymentService, currentState=RUNNING}
java.util.concurrent.ExecutionException: com.aws.greengrass.componentmanager.exceptions.NoAvailableComponentVersionException: No local or cloud component version satisfies the requirements Check whether the version constraints conflict and that the component exists in your AWS account with a version that matches the version constraints. If the version constraints conflict, revise deployments to resolve the conflict. Component <my-component> version constraints: thing/<thing-name> requires =1.0.3.
...
2022-11-26T20:36:20.733Z [ERROR] (pool-2-thread-9) com.aws.greengrass.deployment.DeploymentService: Deployment task failed with following errors. {DeploymentId=arn:aws:greengrass:us-east-1:<account-id>:configuration:thing/<my-thing>:5, detailed-deployment-status=FAILED_NO_STATE_CHANGE, deployment-error-types=[REQUEST_ERROR], GreengrassDeploymentId=3a4cb705-db16-4b5b-94b4-30e53f0edc9b, serviceName=DeploymentService, currentState=RUNNING, deployment-error-stack=[DEPLOYMENT_FAILURE, NO_AVAILABLE_COMPONENT_VERSION, COMPONENT_VERSION_REQUIREMENTS_NOT_MET]}
com.aws.greengrass.componentmanager.exceptions.NoAvailableComponentVersionException: No local or cloud component version satisfies the requirements Check whether the version constraints conflict and that the component exists in your AWS account with a version that matches the version constraints. If the version constraints conflict, revise deployments to resolve the conflict. Component <my-component> version constraints: thing/<my-thing> requires =1.0.3.
	at com.aws.greengrass.componentmanager.ComponentManager.negotiateVersionWithCloud(ComponentManager.java:229)
	at com.aws.greengrass.componentmanager.ComponentManager.resolveComponentVersion(ComponentManager.java:164)
	at com.aws.greengrass.componentmanager.DependencyResolver.lambda$resolveDependencies$2(DependencyResolver.java:125)
	at com.aws.greengrass.componentmanager.DependencyResolver.resolveComponentDependencies(DependencyResolver.java:221)
	at com.aws.greengrass.componentmanager.DependencyResolver.resolveDependencies(DependencyResolver.java:123)
	at com.aws.greengrass.deployment.DefaultDeploymentTask.lambda$call$2(DefaultDeploymentTask.java:125)
```

As stated, the component is available in the Greengrass V2 console and its artifacts are correctly published in S3.
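A first debugging step might be to confirm which versions of the component the cloud actually advertises, independently of the console. A minimal boto3 sketch — the region, account ID, and component name below are placeholders (not values from the post), and pagination is omitted for brevity:

```python
def component_arn(region: str, account_id: str, name: str) -> str:
    """Build the component ARN that ListComponentVersions expects."""
    return f"arn:aws:greengrass:{region}:{account_id}:components:{name}"

def list_cloud_versions(region: str, account_id: str, name: str) -> list:
    """Return the component versions the Greengrass cloud service reports."""
    import boto3  # imported lazily so the ARN helper stays dependency-free

    client = boto3.client("greengrassv2", region_name=region)
    resp = client.list_component_versions(
        arn=component_arn(region, account_id, name)
    )
    return [v["componentVersion"] for v in resp["componentVersions"]]

if __name__ == "__main__":
    # placeholders — substitute your own account/region/component name
    print(list_cloud_versions("us-east-1", "123456789012", "com.example.MyComponent"))
```

If `1.0.3` shows up here but the core still reports `NO_AVAILABLE_COMPONENT_VERSION`, the mismatch is more likely on the device side (e.g. a stale local store or platform/recipe constraints) than in the cloud registry.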
2 answers · 0 votes · 28 views · rodmaz · asked 13 hours ago

AWS Parameters and Secrets Lambda Extension does not work with parameter ARNs

The [AWS documentation](https://docs.aws.amazon.com/systems-manager/latest/userguide/ps-integration-lambda-extensions.html#sample-commands-ps) for the Parameters and Secrets Lambda Extension states:

```
To make a call using the Amazon Resource Name (ARN) for a parameter, make an HTTP GET call similar to the following.

GET http://localhost:port/systemsmanager/parameters/get?name=arn:aws:ssm:us-east-1:123456789012:parameter/MyParameter
```

However, these requests return a 400 stating the parameter name is invalid. Here's a quick example to demonstrate the successful request using the parameter name and the failed request using the parameter ARN:

```py
import json
import os

from botocore.vendored import requests

def lambda_handler(event, context):
    name_url = 'http://localhost:2773/systemsmanager/parameters/get?name=test-param'
    arn_url = 'http://localhost:2773/systemsmanager/parameters/get?name=arn:aws:ssm:us-east-2:{ACCOUNT_ID}:parameter/test-param'
    headers = {'X-Aws-Parameters-Secrets-Token': os.environ['AWS_SESSION_TOKEN']}
    name_resp = requests.get(name_url, headers=headers)
    print(f'NAME RESPONSE: {name_resp.status_code} > {name_resp.text}')
    arn_resp = requests.get(arn_url, headers=headers)
    print(f'ARN RESPONSE: {arn_resp.status_code} > {arn_resp.text}')
```

and the output:

```
NAME RESPONSE: 200 > {"Parameter":{"ARN":"arn:aws:ssm:us-east-2:{ACCOUNT_ID}:parameter/test-param","DataType":"text","LastModifiedDate":"2022-11-26T02:25:14.669Z","Name":"test-param","Selector":null,"SourceResult":null,"Type":"SecureString","Value":"AQICAH....=","Version":2},"ResultMetadata":{}}
ARN RESPONSE: 400 > an unexpected error occurred while executing request
[AWS Parameters and Secrets Lambda Extension] 2022/11/26 18:09:36 ERROR GetParameter request encountered an error: operation error SSM: GetParameter, https response error StatusCode: 400, RequestID: {REQUEST_ID}, api error ValidationException: Invalid parameter name. Please use correct syntax for referencing a version/label <name>:<version/label>
```

The docs also state:

```
When using GET calls, parameter values must be encoded for HTTP to preserve special characters.
```

However, the error still occurs whether the ARN colons and/or slash are URL-encoded or not, like so:

```
http://localhost:2773/systemsmanager/parameters/get?name=arn%3Aaws%3Assm%3Aus-east-2%3A{ACCOUNT_ID}%3Aparameter/test-param
http://localhost:2773/systemsmanager/parameters/get?name=arn%3Aaws%3Assm%3Aus-east-2%3A{ACCOUNT_ID}%3Aparameter%2Ftest-param
```

Am I missing something here, or is the documentation incorrect in that an ARN can be used for these requests?
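For anyone reproducing this, one way to rule out hand-rolled encoding mistakes is to let `urllib.parse.quote` encode the *entire* ARN (every `:` and `/`) before it goes into the query string. A sketch — the ARN and port are the values used in the question, with a placeholder account ID; this demonstrates the encoding only and does not claim it resolves the 400:

```python
from urllib.parse import quote

def param_url(name: str, port: int = 2773) -> str:
    """Build the extension's GET URL with the name fully percent-encoded."""
    # safe="" forces ':' and '/' to be encoded as %3A and %2F as well
    return (f"http://localhost:{port}/systemsmanager/parameters/get"
            f"?name={quote(name, safe='')}")

arn = "arn:aws:ssm:us-east-2:123456789012:parameter/test-param"
print(param_url(arn))
```

This produces the second (fully encoded) form shown above, so if that form still returns 400, the encoding itself can be eliminated as the cause.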
0 answers · 0 votes · 13 views · andy · asked 15 hours ago

Issue in the installation of the AWS CodeDeploy Agent in EC2 instance with AMI - Ubuntu Server 22.04 LTS

I have created an Amazon EC2 instance with AMI Ubuntu Server 22.04 LTS (Free Tier) and am trying to install the AWS CodeDeploy agent on it. I am following the official Amazon documentation for installing the CodeDeploy agent on an Ubuntu server: [AWS Documentation Link](https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent-operations-install-ubuntu.html). I connected to the EC2 instance with SSH key pairs and ran the following commands:

```
sudo apt update
sudo apt install ruby-full
sudo apt install wget
wget https://aws-codedeploy-ap-south-1.s3.ap-south-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
```

The last command produced the following error message:

```
I, [2022-11-25T20:22:45.262298 #4303]  INFO -- : Starting Ruby version check.
E, [2022-11-25T20:22:45.262740 #4303] ERROR -- : Current running Ruby version for root is 3.0.2, but Ruby version 2.x needs to be installed.
E, [2022-11-25T20:22:45.262959 #4303] ERROR -- : If you already have the proper Ruby version installed, please either create a symlink to /usr/bin/ruby2.x,
E, [2022-11-25T20:22:45.263173 #4303] ERROR -- : or run this install script with right interpreter. Otherwise please install Ruby 2.x for root user.
E, [2022-11-25T20:22:45.263378 #4303] ERROR -- : You can get more information by running the script with --help option.
```

Please let me know if there are any other hacks I should employ to install the AWS CodeDeploy agent on an EC2 instance with AMI Ubuntu Server 22.04 LTS (Free Tier). Thanks in advance!
1 answer · 0 votes · 6 views · asked 2 days ago

SageMaker endpoint failing with "An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (413) from primary and could not load the entire response body"

Hello, I have created a SageMaker endpoint by following https://github.com/huggingface/notebooks/blob/main/sagemaker/20_automatic_speech_recognition_inference/sagemaker-notebook.ipynb and it is failing with the error "An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (413) from primary and could not load the entire response body". The predict function returns the following error, but the CloudWatch log does not have any error details for the endpoint.

```
ModelError                                Traceback (most recent call last)
/tmp/ipykernel_16248/2846183179.py in <module>
      2 # audio_path = "s3://ml-backend-sales-call-audio/sales-call-audio/1279881599154831602.playback.mp3"
      3 audio_path = "/home/ec2-user/SageMaker/finetune-deploy-bert-with-amazon-sagemaker-for-hugging-face/1279881599154831602.playback.mp3" ## AS OF NOW have stored locally in notebook instance
----> 4 res = predictor.predict(data=audio_path)
      5 print(res)

~/anaconda3/envs/amazonei_pytorch_latest_p37/lib/python3.7/site-packages/sagemaker/predictor.py in predict(self, data, initial_args, target_model, target_variant, inference_id)
    159             data, initial_args, target_model, target_variant, inference_id
    160         )
--> 161         response = self.sagemaker_session.sagemaker_runtime_client.invoke_endpoint(**request_args)
    162         return self._handle_response(response)
    163

~/anaconda3/envs/amazonei_pytorch_latest_p37/lib/python3.7/site-packages/botocore/client.py in _api_call(self, *args, **kwargs)
    493             )
    494             # The "self" in this scope is referring to the BaseClient.
--> 495             return self._make_api_call(operation_name, kwargs)
    496
    497         _api_call.__name__ = str(py_operation_name)

~/anaconda3/envs/amazonei_pytorch_latest_p37/lib/python3.7/site-packages/botocore/client.py in _make_api_call(self, operation_name, api_params)
    912             error_code = parsed_response.get("Error", {}).get("Code")
    913             error_class = self.exceptions.from_code(error_code)
--> 914             raise error_class(parsed_response, operation_name)
    915         else:
    916             return parsed_response

ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (413) from primary and could not load the entire response body. See https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#logEventViewer:group=/aws/sagemaker/Endpoints/asr-facebook-wav2vec2-base-960h-2022-11-25-19-27-19 in account xxxx for more information.
```
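For context, HTTP 413 from the primary container generally means the request payload was too large for real-time inference. A hedged pre-flight check — the 6 MB figure below is the commonly cited real-time `InvokeEndpoint` payload cap; treat it as an assumption and confirm against current SageMaker quotas:

```python
import os

# Assumed real-time InvokeEndpoint payload cap (verify against SageMaker quotas)
REALTIME_PAYLOAD_CAP = 6 * 1024 * 1024  # bytes

def fits_realtime_limit(path: str, cap: int = REALTIME_PAYLOAD_CAP) -> bool:
    """Return True if the file at `path` is small enough to send inline
    to a real-time endpoint; False means expect a 413."""
    return os.path.getsize(path) <= cap
```

If the MP3 exceeds the cap, Asynchronous Inference or Batch Transform (both of which accept much larger inputs) may be a better fit than a real-time endpoint.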
1 answer · 0 votes · 30 views · asked 2 days ago

Glue running in Docker not able to find com.mysql.cj.jdbc.Driver

Following along with this [blog post](https://aws.amazon.com/blogs/big-data/develop-and-test-aws-glue-version-3-0-jobs-locally-using-a-docker-container/), I'm attempting to debug/breakpoint my Glue tasks running in VS Code using `amazon/aws-glue-libs:glue_libs_3.0.0_image_01`. I can get up to the point where the job executes, and I can step through the code right up until the point I try to connect to RDS to fetch data. As soon as I do, I get back:

```
An error occurred while calling o47.getDynamicFrame.
: java.lang.ClassNotFoundException: com.mysql.cj.jdbc.Driver
	at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at com.amazonaws.services.glue.util.JDBCUtils.loadDriver(JDBCUtils.scala:214)
	at com.amazonaws.services.glue.util.JDBCUtils.loadDriver$(JDBCUtils.scala:212)
	at com.amazonaws.services.glue.util.MySQLUtils$.loadDriver(JDBCUtils.scala:490)
	at com.amazonaws.services.glue.util.JDBCWrapper.getRawConnection(JDBCUtils.scala:746)
	at com.amazonaws.services.glue.JDBCDataSource.getPrimaryKeys(DataSource.scala:1006)
	at com.amazonaws.services.glue.JDBCDataSource.$anonfun$getJdbcJobBookmark$1(DataSource.scala:878)
	at scala.collection.MapLike.getOrElse(MapLike.scala:131)
	at scala.collection.MapLike.getOrElse$(MapLike.scala:129)
	at scala.collection.AbstractMap.getOrElse(Map.scala:63)
	at com.amazonaws.services.glue.JDBCDataSource.getJdbcJobBookmark(DataSource.scala:878)
	at com.amazonaws.services.glue.JDBCDataSource.getDynamicFrame(DataSource.scala:953)
	at com.amazonaws.services.glue.DataSource.getDynamicFrame(DataSource.scala:99)
	at com.amazonaws.services.glue.DataSource.getDynamicFrame$(DataSource.scala:99)
	at com.amazonaws.services.glue.SparkSQLDataSource.getDynamicFrame(DataSource.scala:714)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:750)
```

I'm not sure how to solve this problem. I see it's mentioned in the blog post that I can pass in extra libraries; however, when I look in `/home/glue_user/aws-glue-libs/jars` I can see a jar named `mssql-jdbc-7.0.0.jre8.jar`, so I'm not so sure that's the problem. I should mention this job runs without a problem when deployed to AWS. I'm currently starting up `amazon/aws-glue-libs:glue_libs_3.0.0_image_01` using a very basic docker-compose file:

```
version: "3.8"
services:
  glue:
    container_name: "glue-local-development"
    image: amazon/aws-glue-libs:glue_libs_3.0.0_image_01
    ports:
      - "4040:4040"
      - "18080:18080"
    environment:
      - DISABLE_SSL=true
      - AWS_PROFILE=my_profile
    volumes:
      - ~/.aws:/home/glue_user/.aws
      - ${PWD}:/home/glue_user/workspace/
    stdin_open: true
```

Then I connect as per the blog post. Is there something else I have to do here? I don't think I should have to manually load in the MySQL jars. I've been stuck at this point for a while, so I would really appreciate any help or suggestions.

Edit: Interestingly, when I attempt to run `amazon/aws-glue-libs:glue_libs_2.0.0_image_01`, it fails with a very similar but different error:

```
: An error occurred while calling o49.getDynamicFrame.
: java.io.FileNotFoundException:  (No such file or directory)
	at java.io.FileInputStream.open0(Native Method)
	at java.io.FileInputStream.open(FileInputStream.java:195)
	at java.io.FileInputStream.<init>(FileInputStream.java:138)
	at com.amazonaws.glue.jdbc.commons.CustomCertificateManager.importCustomJDBCCert(CustomCertificateManager.java:127)
	at com.amazonaws.services.glue.util.JDBCWrapper$.connectionProperties(JDBCUtils.scala:947)
	at com.amazonaws.services.glue.util.JDBCWrapper.connectionProperties$lzycompute(JDBCUtils.scala:734)
	at com.amazonaws.services.glue.util.JDBCWrapper.connectionProperties(JDBCUtils.scala:734)
	at com.amazonaws.services.glue.util.JDBCWrapper.getRawConnection(JDBCUtils.scala:747)
	at com.amazonaws.services.glue.JDBCDataSource.getPrimaryKeys(DataSource.scala:996)
	at com.amazonaws.services.glue.JDBCDataSource$$anonfun$33.apply(DataSource.scala:868)
	at com.amazonaws.services.glue.JDBCDataSource$$anonfun$33.apply(DataSource.scala:868)
	at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
	at scala.collection.AbstractMap.getOrElse(Map.scala:59)
	at com.amazonaws.services.glue.JDBCDataSource.getJdbcJobBookmark(DataSource.scala:868)
	at com.amazonaws.services.glue.JDBCDataSource.getDynamicFrame(DataSource.scala:943)
	at com.amazonaws.services.glue.DataSource$class.getDynamicFrame(DataSource.scala:97)
	at com.amazonaws.services.glue.SparkSQLDataSource.getDynamicFrame(DataSource.scala:707)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:750)
```
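One workaround sometimes used when a `ClassNotFoundException` points at a missing JDBC driver is to download MySQL Connector/J locally and mount it into the directory where the image's other JDBC jars live. A sketch extending the compose file from the question — the jar filename/version and the target path inside the container are assumptions, not verified against this image:

```
version: "3.8"
services:
  glue:
    container_name: "glue-local-development"
    image: amazon/aws-glue-libs:glue_libs_3.0.0_image_01
    ports:
      - "4040:4040"
      - "18080:18080"
    environment:
      - DISABLE_SSL=true
      - AWS_PROFILE=my_profile
    volumes:
      - ~/.aws:/home/glue_user/.aws
      - ${PWD}:/home/glue_user/workspace/
      # assumption: mount the MySQL JDBC driver alongside the image's other JDBC jars
      - ${PWD}/jars/mysql-connector-java-8.0.30.jar:/home/glue_user/aws-glue-libs/jars/mysql-connector-java-8.0.30.jar
    stdin_open: true
```

The presence of `mssql-jdbc-7.0.0.jre8.jar` would only cover SQL Server; it would not provide `com.mysql.cj.jdbc.Driver`, which may be why the job works on AWS (where Glue supplies the MySQL driver) but not locally.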
1 answer · 0 votes · 17 views · asked 2 days ago

Recent articles
