
Compute

Whether you are building enterprise, cloud-native, or mobile apps, or running massive data clusters, AWS Compute services support virtually any workload. Use AWS Compute services to develop, deploy, run, and scale your applications and workloads.

Recent questions


NICE DCV Web Client SDK Angular integration

I am currently looking to include the NICE DCV Web Client SDK in an **Angular** project but have stumbled onto a problem. With the default setup, the following error is displayed in the browser:

```
DOMException: Failed to execute 'importScripts' on 'WorkerGlobalScope': The script at 'https://<server-url>/recon/dcv/dcvjs/dcv/broadwayh264decoder-worker.js' failed to load.
```

The same is true for the file *lz4decoder-worker.js*. It seems that the way Angular generates the URL path causes this problem: the source files are not served at the location the dcv.js file expects them to be. The HTML file that contains the dcv-viewer div component is located at 'https://<server-url>/recon/dcv/' and served by Angular.

It is now working with a workaround. In **dcv.js** there is a line where the location of the source files is derived from the base URL of the site, with 'dcvjs' appended:

```
...?e.baseUrl.replace(/^\/+|\/+$/g,""):"dcvjs",!new RegExp("^http","i")...
```

Replacing "dcvjs" directly with the correct URL ("https://<server-url>/recon/assets/dcvjs"), the application can now locate the necessary .js files. This is, however, for obvious reasons a rather dirty solution to the problem: we would have to replace the static server URL manually for each environment.

So my question is: is it somehow possible to configure the project so that the resources are correctly provided to the Angular application?

Further, I have noticed that the sample application of AWS AppStream 2.0 uses NICE DCV in an Angular context. Would it be possible to provide an Angular plugin similar to the React component already included in the SDK? The perfect solution, of course, would be to be able to install the SDK using npm :)

Thanks in advance, Julian
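For readers hitting the same path issue: Angular can copy static files into the build output at a path of your choosing via the `assets` array in angular.json, which may avoid patching dcv.js. The fragment below is a minimal sketch only; `src/vendor/dcvjs` is a hypothetical location for the vendored SDK files, and the `output` path may need adjusting to match wherever dcv.js resolves 'dcvjs' relative to your deployed base href.

```json
{
  "architect": {
    "build": {
      "options": {
        "assets": [
          { "glob": "**/*", "input": "src/vendor/dcvjs", "output": "/dcvjs" }
        ]
      }
    }
  }
}
```

Because the copied files are then served relative to the application's base href, no absolute server URL has to be hard-coded per environment.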
1 answer · 0 votes · 7 views · asked 4 hours ago

Cannot add environment variable through .ebextensions

I'm using .ebextensions to create VPC endpoints, so in the **Resources** section I've added the needed section for the VPCEndpoint. After that, in the **option_settings** section, I'm trying to add an environment variable to my Elastic Beanstalk application referencing the created VPCEndpoint. But when I check the environment variables from the Elastic Beanstalk console, the value is added as plain text, not the Ref of the VPCEndpoint (check the screenshot). So how can I make it interpret the Ref of the endpoint?

![Enter image description here](/media/postImages/original/IMkQEkAlsLRyCG5pYs1i8hkA)

```
Resources:
  NewsonarVPCEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      PrivateDnsEnabled: false
      SecurityGroupIds:
        - {"Fn::GetOptionSetting": {"Namespace": "aws:elasticbeanstalk:application:environment", "OptionName": "ALLOW_INBOUND_FROM_VPC_SECURITY_GROUP", "DefaultValue": "default_value"}}
      ServiceName: {"Fn::Join": ["", ["com.amazonaws.vpce.", {"Fn::GetOptionSetting": {"Namespace": "aws:elasticbeanstalk:application:environment", "OptionName": "AWS_REGION", "DefaultValue": "us-east-1"}}, ".", {"Ref": "sonarVPCEndpointService"}]]}
      SubnetIds:
        - { "Ref": "Subnet1Id" }
        - { "Ref": "Subnet2Id" }
        - { "Ref": "Subnet3Id" }
      VpcEndpointType: Interface
      VpcId: { "Ref": "VpcId" }

option_settings:
  aws:elasticbeanstalk:application:environment:
    VPC_ENDPOINT: '`{"Ref" : "NewsonarVPCEndpoint"}`'
```
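While debugging this, it may help to confirm exactly what value Elastic Beanstalk stored for the variable, independent of what the console shows. A sketch using the AWS CLI; `my-app` and `my-env` are placeholder names:

```bash
# Show the stored VPC_ENDPOINT option, exactly as Elastic Beanstalk holds it.
aws elasticbeanstalk describe-configuration-settings \
  --application-name my-app \
  --environment-name my-env \
  --query "ConfigurationSettings[].OptionSettings[?Namespace=='aws:elasticbeanstalk:application:environment' && OptionName=='VPC_ENDPOINT']"
```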
0 answers · 0 votes · 4 views · asked 5 hours ago

AWS PyTorch Neuron Compilation Error

I followed the user guide on updating torch-neuron and then started compiling the model for Neuron, but got an error from which I don't understand what's wrong. In the Neuron SDK you claim that it should compile all operations, even unsupported ones; they should just run on CPU. The error:

```
INFO:Neuron:All operators are compiled by neuron-cc (this does not guarantee that neuron-cc will successfully compile)
INFO:Neuron:Number of arithmetic operators (pre-compilation) before = 3345, fused = 3345, percent fused = 100.0%
INFO:Neuron:Number of neuron graph operations 8175 did not match traced graph 9652 - using heuristic matching of hierarchical information
INFO:Neuron:Compiling function _NeuronGraph$3362 with neuron-cc
INFO:Neuron:Compiling with command line: '/home/ubuntu/alias/neuron/neuron_env/bin/neuron-cc compile /tmp/tmpmp8qvhtb/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmpmp8qvhtb/graph_def.neff --io-config {"inputs": {"0:0": [[1, 3, 768, 768], "float32"]}, "outputs": ["aten_sigmoid/Sigmoid:0"]} --verbose 35'
..............................................................................INFO:Neuron:Compile command returned: -9
WARNING:Neuron:torch.neuron.trace failed on _NeuronGraph$3362; falling back to native python function call
ERROR:Neuron:neuron-cc failed with the following command line call:
/home/ubuntu/alias/neuron/neuron_env/bin/neuron-cc compile /tmp/tmpmp8qvhtb/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmpmp8qvhtb/graph_def.neff --io-config '{"inputs": {"0:0": [[1, 3, 768, 768], "float32"]}, "outputs": ["aten_sigmoid/Sigmoid:0"]}' --verbose 35
Traceback (most recent call last):
  File "/home/ubuntu/alias/neuron/neuron_env/lib/python3.7/site-packages/torch_neuron/convert.py", line 382, in op_converter
    item, inputs, compiler_workdir=sg_workdir, **kwargs)
  File "/home/ubuntu/alias/neuron/neuron_env/lib/python3.7/site-packages/torch_neuron/decorators.py", line 220, in trace
    'neuron-cc failed with the following command line call:\n{}'.format(command))
subprocess.SubprocessError: neuron-cc failed with the following command line call:
/home/ubuntu/alias/neuron/neuron_env/bin/neuron-cc compile /tmp/tmpmp8qvhtb/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmpmp8qvhtb/graph_def.neff --io-config '{"inputs": {"0:0": [[1, 3, 768, 768], "float32"]}, "outputs": ["aten_sigmoid/Sigmoid:0"]}' --verbose 35
INFO:Neuron:Number of arithmetic operators (post-compilation) before = 3345, compiled = 0, percent compiled = 0.0%
INFO:Neuron:The neuron partitioner created 1 sub-graphs
INFO:Neuron:Neuron successfully compiled 0 sub-graphs, Total fused subgraphs = 1, Percent of model sub-graphs successfully compiled = 0.0%
INFO:Neuron:Compiled these operators (and operator counts) to Neuron:
INFO:Neuron:Not compiled operators (and operator counts) to Neuron:
INFO:Neuron: => aten::Int: 942 [supported]
INFO:Neuron: => aten::_convolution: 107 [supported]
INFO:Neuron: => aten::add: 104 [supported]
INFO:Neuron: => aten::batch_norm: 1 [supported]
INFO:Neuron: => aten::cat: 1 [supported]
INFO:Neuron: => aten::contiguous: 4 [supported]
INFO:Neuron: => aten::div: 104 [supported]
INFO:Neuron: => aten::dropout: 208 [supported]
INFO:Neuron: => aten::feature_dropout: 1 [supported]
INFO:Neuron: => aten::flatten: 60 [supported]
INFO:Neuron: => aten::gelu: 52 [supported]
INFO:Neuron: => aten::layer_norm: 161 [supported]
INFO:Neuron: => aten::linear: 264 [supported]
INFO:Neuron: => aten::matmul: 104 [supported]
INFO:Neuron: => aten::mul: 52 [supported]
INFO:Neuron: => aten::permute: 210 [supported]
INFO:Neuron: => aten::relu: 1 [supported]
INFO:Neuron: => aten::reshape: 262 [supported]
INFO:Neuron: => aten::select: 104 [supported]
INFO:Neuron: => aten::sigmoid: 1 [supported]
INFO:Neuron: => aten::size: 278 [supported]
INFO:Neuron: => aten::softmax: 52 [supported]
INFO:Neuron: => aten::transpose: 216 [supported]
INFO:Neuron: => aten::upsample_bilinear2d: 4 [supported]
INFO:Neuron: => aten::view: 52 [supported]
Traceback (most recent call last):
  File "to_neuron.py", line 14, in <module>
    model_neuron = torch.neuron.trace(model, example_inputs=[image.cuda()])
  File "/home/ubuntu/alias/neuron/neuron_env/lib/python3.7/site-packages/torch_neuron/convert.py", line 184, in trace
    cu.stats_post_compiler(neuron_graph)
  File "/home/ubuntu/alias/neuron/neuron_env/lib/python3.7/site-packages/torch_neuron/convert.py", line 493, in stats_post_compiler
    "No operations were successfully partitioned and compiled to neuron for this model - aborting trace!")
RuntimeError: No operations were successfully partitioned and compiled to neuron for this model - aborting trace!
```
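A note for readers of the log above: `Compile command returned: -9` means neuron-cc exited on signal 9 (SIGKILL), which on Linux is most often the kernel OOM killer terminating a process that ran out of memory, not an unsupported operator. A quick way to check on the build host (assuming a standard Linux instance):

```bash
# Look for OOM-killer activity around the time the compilation died.
sudo dmesg -T | grep -iE "out of memory|killed process"

# Watch available memory while re-running the compilation in another shell.
free -h
```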
0 answers · 0 votes · 3 views · asked a day ago

Trouble with the AWS Lambda runtime API with a Docker image

## Short version

I am running a Lambda function in a Docker container, and all executions are marked as failures with a Runtime.ExitError, even though I am using the runtime API and the Lambda added as an on_success destination is running.

## Longer version, with context

I have a setup with a bunch of functions chained using API invocations and destinations. One of them requires a custom runtime (the handler is a PHP command), so I have been using a Docker image for it. To get it running correctly, I am getting the request ID in the entrypoint, and in the command I run both my command and a curl to the runtime API, like so:

```
CMD ["/bin/bash", "-c", "/app/bin/my-super-command && curl --silent -X POST \"http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/${REQUEST_ID}/response\" -d 'SUCCESS'"]
```

I know the request ID is correct (I am printing it in the entrypoint), and at the end of the logs I am getting the following lines (edited, of course):

```
End of my-super-command
{"status":"OK"}
END RequestId: 123456-abcd-1234-abcd-12345678910
REPORT RequestId: 123456-abcd-1234-abcd-12345678910 Duration: 39626.80 ms Billed Duration: 39777 ms Memory Size: 384 MB Max Memory Used: 356 MB Init Duration: 149.26 ms
RequestId: 123456-abcd-1234-abcd-12345678910 Error: Runtime exited without providing a reason Runtime.ExitError
Beginning of the entrypoint
```

The first line is from my command, and the second line is the output from the curl (it looks like a success, and the API documentation seems to agree with me), but as we can see, the invocation is marked as failed later.

The weird stuff:

* The Lambda logs a failure even though the runtime API returns an OK to my success call
* The Lambda is marked as failed in the monitoring
* The function I put after this one in the workflow, in a destination with the `on_success` condition, runs!

The problems I have had and already worked through:

* I am getting the request ID with a combination of grep/sed/trim because there's a \r somewhere; that's not optimal, but I am printing it and it appears correctly (I have printed the full curl command too, just in case)
* I have had issues with timeout/OOM, but as you can see above, that is not the case here

Am I missing something here? Maybe I did not understand the usage of the runtime API. As you can see, the next run seems to be launched but interrupted, so there might be some timing issue.
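For context on the error name: the documented custom-runtime contract expects the runtime process to stay alive and loop, fetching each event from `/runtime/invocation/next`, handling it, and POSTing the result to `/runtime/invocation/<request-id>/response`. A process that exits after a single invocation, as the CMD above does, is reported as Runtime.ExitError even when the response POST succeeded. A minimal sketch of that loop, following the pattern from the AWS custom-runtime tutorial and reusing the question's `/app/bin/my-super-command`:

```bash
#!/bin/bash
# Minimal custom-runtime loop: the process must not exit between invocations.
set -euo pipefail

while true; do
  HEADERS="$(mktemp)"
  # Block until Lambda hands us the next event; response headers land in $HEADERS.
  EVENT_DATA=$(curl -sS -LD "$HEADERS" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  # The request ID for this invocation comes from a response header.
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)

  # Run the handler (the PHP command from the question), then report its result.
  RESPONSE=$(/app/bin/my-super-command "$EVENT_DATA")
  curl -sS -X POST \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/${REQUEST_ID}/response" \
    -d "$RESPONSE"
done
```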
0 answers · 0 votes · 13 views · asked a day ago
