All Questions


Not able to convert Hugging Face fine-tuned BERT model into AWS Neuron

Hi Team, I have a fine-tuned BERT model that was trained with the following library versions:

- torch == 1.8.1+cu111
- transformers == 4.19.4

I am not able to convert this fine-tuned BERT model to AWS Neuron; compilation fails with the errors below. Could you please help me resolve this issue?

**Note:** I am compiling the BERT model on a SageMaker notebook instance, in the "conda_python3" conda environment.

**Installation:**

```shell
# Set Pip repository to point to the Neuron repository
!pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install Neuron PyTorch - Note: tried both options below.
# !pip install torch-neuron==1.8.1.* neuron-cc[tensorflow] "protobuf<4" torchvision sagemaker>=2.79.0 transformers==4.17.0 --upgrade
!pip install --upgrade torch-neuron neuron-cc[tensorflow] "protobuf<4" torchvision
```

---

**Model compilation:**

```python
import os
import tensorflow  # to work around a protobuf version conflict issue
import torch
import torch.neuron
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_path = 'model/'  # model artifacts are stored in the 'model/' directory

# load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path, torchscript=True)

# create dummy input for max length 128
dummy_input = "dummy input which will be padded later"
max_length = 128
embeddings = tokenizer(dummy_input, max_length=max_length, padding="max_length",
                       truncation=True, return_tensors="pt")
neuron_inputs = tuple(embeddings.values())

# compile model with torch.neuron.trace and update config
model_neuron = torch.neuron.trace(model, neuron_inputs)
model.config.update({"traced_sequence_length": max_length})

# save tokenizer, neuron model and config for later use
save_dir = "tmpd"
os.makedirs(save_dir, exist_ok=True)
model_neuron.save(os.path.join(save_dir, "neuron_model.pt"))
tokenizer.save_pretrained(save_dir)
model.config.save_pretrained(save_dir)
```

---

**Model artifacts:** these artifacts came from a multi-label topic classification model.

- config.json
- model.tar.gz
- pytorch_model.bin
- special_tokens_map.json
- tokenizer_config.json
- tokenizer.json

---

**Error logs:**

```
INFO:Neuron:There are 3 ops of 1 different types in the TorchScript that are not compiled by neuron-cc: aten::embedding, (For more information see https://github.com/aws/aws-neuron-sdk/blob/master/release-notes/neuron-cc-ops/neuron-cc-ops-pytorch.md)
INFO:Neuron:Number of arithmetic operators (pre-compilation) before = 565, fused = 548, percent fused = 96.99%
INFO:Neuron:Number of neuron graph operations 1601 did not match traced graph 1323 - using heuristic matching of hierarchical information
WARNING:tensorflow:From /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/torch_neuron/ops/aten.py:2022: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
INFO:Neuron:Compiling function _NeuronGraph$698 with neuron-cc
INFO:Neuron:Compiling with command line: '/home/ec2-user/anaconda3/envs/python3/bin/neuron-cc compile /tmp/tmpv4gg13ze/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmpv4gg13ze/graph_def.neff --io-config {"inputs": {"0:0": [[1, 128, 768], "float32"], "1:0": [[1, 1, 1, 128], "float32"]}, "outputs": ["Linear_5/aten_linear/Add:0"]} --verbose 35'
INFO:Neuron:Compile command returned: -9
WARNING:Neuron:torch.neuron.trace failed on _NeuronGraph$698; falling back to native python function call
ERROR:Neuron:neuron-cc failed with the following command line call:
/home/ec2-user/anaconda3/envs/python3/bin/neuron-cc compile /tmp/tmpv4gg13ze/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmpv4gg13ze/graph_def.neff --io-config '{"inputs": {"0:0": [[1, 128, 768], "float32"], "1:0": [[1, 1, 1, 128], "float32"]}, "outputs": ["Linear_5/aten_linear/Add:0"]}' --verbose 35
Traceback (most recent call last):
  File "/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/torch_neuron/convert.py", line 382, in op_converter
    item, inputs, compiler_workdir=sg_workdir, **kwargs)
  File "/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/torch_neuron/decorators.py", line 220, in trace
    'neuron-cc failed with the following command line call:\n{}'.format(command))
subprocess.SubprocessError: neuron-cc failed with the following command line call:
/home/ec2-user/anaconda3/envs/python3/bin/neuron-cc compile /tmp/tmpv4gg13ze/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmpv4gg13ze/graph_def.neff --io-config '{"inputs": {"0:0": [[1, 128, 768], "float32"], "1:0": [[1, 1, 1, 128], "float32"]}, "outputs": ["Linear_5/aten_linear/Add:0"]}' --verbose 35
INFO:Neuron:Number of arithmetic operators (post-compilation) before = 565, compiled = 0, percent compiled = 0.0%
INFO:Neuron:The neuron partitioner created 1 sub-graphs
INFO:Neuron:Neuron successfully compiled 0 sub-graphs, Total fused subgraphs = 1, Percent of model sub-graphs successfully compiled = 0.0%
INFO:Neuron:Compiled these operators (and operator counts) to Neuron:
INFO:Neuron:Not compiled operators (and operator counts) to Neuron:
INFO:Neuron: => aten::Int: 97 [supported]
INFO:Neuron: => aten::add: 39 [supported]
INFO:Neuron: => aten::contiguous: 12 [supported]
INFO:Neuron: => aten::div: 12 [supported]
INFO:Neuron: => aten::dropout: 38 [supported]
INFO:Neuron: => aten::embedding: 3 [not supported]
INFO:Neuron: => aten::gelu: 12 [supported]
INFO:Neuron: => aten::layer_norm: 25 [supported]
INFO:Neuron: => aten::linear: 74 [supported]
INFO:Neuron: => aten::matmul: 24 [supported]
INFO:Neuron: => aten::mul: 1 [supported]
INFO:Neuron: => aten::permute: 48 [supported]
INFO:Neuron: => aten::rsub: 1 [supported]
INFO:Neuron: => aten::select: 1 [supported]
INFO:Neuron: => aten::size: 97 [supported]
INFO:Neuron: => aten::slice: 5 [supported]
INFO:Neuron: => aten::softmax: 12 [supported]
INFO:Neuron: => aten::tanh: 1 [supported]
INFO:Neuron: => aten::to: 1 [supported]
INFO:Neuron: => aten::transpose: 12 [supported]
INFO:Neuron: => aten::unsqueeze: 2 [supported]
INFO:Neuron: => aten::view: 48 [supported]
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-1-97bba321d013> in <module>
     18
     19 # compile model with torch.neuron.trace and update config
---> 20 model_neuron = torch.neuron.trace(model, neuron_inputs)
     21 model.config.update({"traced_sequence_length": max_length})
     22
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch_neuron/convert.py in trace(func, example_inputs, fallback, op_whitelist, minimum_segment_size, subgraph_builder_function, subgraph_inputs_pruning, skip_compiler, debug_must_trace, allow_no_ops_on_neuron, compiler_workdir, dynamic_batch_size, compiler_timeout, _neuron_trace, compiler_args, optimizations, verbose, **kwargs)
    182                 logger.debug("skip_inference_context - trace with fallback at {}".format(get_file_and_line()))
    183                 neuron_graph = cu.compile_fused_operators(neuron_graph, **compile_kwargs)
--> 184         cu.stats_post_compiler(neuron_graph)
    185
    186         # Wrap the compiled version of the model in a script module. Note that this is
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch_neuron/convert.py in stats_post_compiler(self, neuron_graph)
    491         if succesful_compilations == 0 and not self.allow_no_ops_on_neuron:
    492             raise RuntimeError(
--> 493                 "No operations were successfully partitioned and compiled to neuron for this model - aborting trace!")
    494
    495         if percent_operations_compiled < 50.0:

RuntimeError: No operations were successfully partitioned and compiled to neuron for this model - aborting trace!
```

---

Thanks a lot.
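Two observations that may help narrow this down. A compile return code of -9 usually means the neuron-cc process was killed with signal 9, which on a small notebook instance is often the kernel OOM killer rather than a model problem, so retrying on an instance with more memory may be worth a try. Separately, the unsupported `aten::embedding` ops can be inspected without a full compile. A minimal sketch, assuming `torch.neuron.analyze_model` behaves as in the Neuron SDK examples:

```python
# Sketch: run Neuron's op analysis on its own, before torch.neuron.trace.
# Reuses the `model` and `neuron_inputs` defined in the compilation snippet;
# treat the exact report format as indicative rather than guaranteed.
import torch
import torch.neuron

torch.neuron.analyze_model(model, example_inputs=neuron_inputs)
```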
0 answers · 0 votes · 4 views · asked 2 hours ago

EMR Studio - Can you import local code into a notebook?

We are trying to use EMR Studio as a development environment for a medium-complexity project whose code is split across multiple files for testing and maintainability. There is simply too much code to keep in one long file, and I cannot work out how to import local code into a notebook to run or test it.

## Example layout

Here is a simplified example (our project is much larger):

```shell
my_notebook.ipynb
my_project/
    __init__.py
    model.py
    report.py
```

In the notebook we might have a cell like:

```python
from my_project.model import DataModel
from my_project.report import Report

report = Report(DataModel(spark))
report.show()
```

The current result is:

```
An error was encountered:
No module named 'my_project'
Traceback (most recent call last):
ModuleNotFoundError: No module named 'my_project'
```

Is this possible?

## Execution environment

It appears that the Python execution environment and the shell environment are completely separate, and the current directory is not available to the Python interpreter:

| Execution environment | Key | Value |
| ----- | --------- | ------ |
| Python | User | `livy` |
| Python | Current working dir | `/mnt/var/lib/livy` |
| `%%sh` | User | `emr-notebook` |
| `%%sh` | Current working dir | `/home/emr-notebook/e-<HEX_STRING>` |

The `/home/emr-notebook/...` directory appears to contain our code, but the `livy` user we appear to run as doesn't have permission to read it. So even if we could guess the CWD and add it to the Python path, it appears Python would not have permission to read the code.
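One pattern that may work here, since Livy cannot see the notebook's working directory: package the project as a zip in S3 and distribute it with `addPyFile`. A sketch only, assuming the notebook has a live `spark` session and a bucket (the hypothetical `my-bucket`) readable by the cluster's EC2 role:

```python
# Sketch, assuming the package has been zipped and uploaded to S3 once, e.g.:
#   zip -r my_project.zip my_project
#   aws s3 cp my_project.zip s3://my-bucket/deps/my_project.zip
# ("my-bucket" is a placeholder.)
# Then, in a notebook cell running against Livy:

spark.sparkContext.addPyFile("s3://my-bucket/deps/my_project.zip")

# addPyFile ships the zip to the driver and executors and adds it to
# sys.path, so package imports resolve from the archive:
from my_project.model import DataModel
from my_project.report import Report

report = Report(DataModel(spark))
report.show()
```

The zip must contain `my_project/` at its root (not the files directly) for the package import to work.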
0 answers · 0 votes · 5 views · asked 3 hours ago

[Greengrass][IDT] Greengrass v2 IDT test errors on component/mqtt tests on our AIoT device

Hi,

I am getting Greengrass v2 IDT test errors on the component/mqtt tests (the coredependencies/version/pretestvalidation tests pass), and I am wondering whether it is a Java environment compatibility issue. I found a nearly identical question asked two months ago, but with no positive fix/answer in the thread: https://repost.aws/questions/QU1ZPN9d6sSPGjetFnH88uuw/greengrass-idt-test-component-and-mqtt-failed-with-general-info

My testing environment (only the mandatory tests enabled):

1. IDT running host: Windows 10, OpenJDK for Windows 17.0.3.
2. DUT: MediaTek SoC running Yocto 3.1 with OpenJDK 13.0.5 and Greengrass v2 core 2.5.5.

The test manager summary is:

```
========== Test Summary ==========
Execution Time:     16m31s
Tests Completed:    7
Tests Passed:       3
Tests Failed:       4
Tests Skipped:      0
----------------------------------
Test Groups:
    coredependencies:   PASSED
    version:            PASSED
    component:          FAILED
    mqtt:               FAILED
    lambdadeployment:   FAILED
    pretestvalidation:  PASSED
```

I get exactly the same Java exception on both the cloud component and mqtt tests and cannot continue. It looks like a compatibility problem between the test jar and the Java runtime platform. Do you have any experience with this, or a hint or guide for fixing it?

Thanks,
A.J.

```
2022-6月-16 13:05:28,468 [mqtt] [idt-11c34a9894540dc4e465] [ERROR] greengrass/features/mqtt.feature - Failed at step: 'my device is running Greengrass'
com.google.inject.ConfigurationException: Guice configuration errors:

1) [Guice/ErrorInUserCode]: Unable to method intercept: GreengrassSteps
  while locating GreengrassSteps

1 error

======================
Full classname legend:
======================
GreengrassSteps: "com.aws.greengrass.testing.features.GreengrassSteps"
========================
End of classname legend:
========================
	at com.google.inject.internal.InjectorImpl.getProvider(InjectorImpl.java:1126) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.InjectorImpl.getProvider(InjectorImpl.java:1086) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1138) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at io.cucumber.guice.GuiceFactory.getInstance(GuiceFactory.java:43) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at io.cucumber.java.JavaStepDefinition.execute(JavaStepDefinition.java:27) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at io.cucumber.core.runner.PickleStepDefinitionMatch.runStep(PickleStepDefinitionMatch.java:63) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at io.cucumber.core.runner.TestStep.executeStep(TestStep.java:64) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at io.cucumber.core.runner.TestStep.run(TestStep.java:49) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at io.cucumber.core.runner.PickleStepTestStep.run(PickleStepTestStep.java:46) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at io.cucumber.core.runner.TestCase.run(TestCase.java:51) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at io.cucumber.core.runner.Runner.runPickle(Runner.java:67) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at io.cucumber.core.runtime.Runtime.lambda$run$2(Runtime.java:100) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
	at io.cucumber.core.runtime.Runtime$SameThreadExecutorService.execute(Runtime.java:243) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:123) ~[?:?]
	at io.cucumber.core.runtime.Runtime.lambda$run$3(Runtime.java:100) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) [?:?]
	at java.util.stream.SliceOps$1$1.accept(SliceOps.java:200) [?:?]
	at java.util.ArrayList$ArrayListSpliterator.tryAdvance(ArrayList.java:1602) [?:?]
	at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:129) [?:?]
	at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:527) [?:?]
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:513) [?:?]
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) [?:?]
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) [?:?]
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) [?:?]
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) [?:?]
	at io.cucumber.core.runtime.Runtime.run(Runtime.java:101) [AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.aws.greengrass.testing.launcher.TestLauncher.main(TestLauncher.java:130) [AWSGreengrassV2TestingIDT-1.0.jar:?]
Caused by: java.lang.IllegalArgumentException: Constructor is not visible
	at com.google.common.base.Preconditions.checkArgument(Preconditions.java:142) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.BytecodeGen.enhancedConstructor(BytecodeGen.java:113) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.ProxyFactory$ProxyConstructor.<init>(ProxyFactory.java:175) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.ProxyFactory.create(ProxyFactory.java:151) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.ConstructorInjectorStore.createConstructor(ConstructorInjectorStore.java:93) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.ConstructorInjectorStore.access$000(ConstructorInjectorStore.java:30) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.ConstructorInjectorStore$1.create(ConstructorInjectorStore.java:38) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.ConstructorInjectorStore$1.create(ConstructorInjectorStore.java:34) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.FailableCache$1.load(FailableCache.java:43) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.common.cache.LocalCache.get(LocalCache.java:3951) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4935) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4941) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.FailableCache.get(FailableCache.java:54) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.ConstructorInjectorStore.get(ConstructorInjectorStore.java:49) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.ConstructorBindingImpl.initialize(ConstructorBindingImpl.java:148) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.InjectorImpl.initializeJitBinding(InjectorImpl.java:606) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.InjectorImpl.createJustInTimeBinding(InjectorImpl.java:943) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.InjectorImpl.createJustInTimeBindingRecursive(InjectorImpl.java:863) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.InjectorImpl.getJustInTimeBinding(InjectorImpl.java:301) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.InjectorImpl.getBindingOrThrow(InjectorImpl.java:224) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.InjectorImpl.getProviderOrThrow(InjectorImpl.java:1092) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	at com.google.inject.internal.InjectorImpl.getProvider(InjectorImpl.java:1121) ~[AWSGreengrassV2TestingIDT-1.0.jar:?]
	... 28 more
```
0 answers · 0 votes · 5 views · asked 8 hours ago

Scaling AWS Step Functions and Comprehend jobs under the concurrent active asynchronous jobs quota

1. I am trying to implement a solution that integrates AWS Comprehend targeted sentiment with Step Functions, and then make it public for people to use as an API.
2. I need to wait until the Comprehend job is complete before moving forward with the workflow. Since the job is asynchronous, **I created a wait-time poller that periodically checks the job status** using `describe_targeted_sentiment_detection_job`, following an integration pattern similar to https://docs.aws.amazon.com/step-functions/latest/dg/sample-project-job-poller.html.
3. However, there seems to be a **concurrent active asynchronous jobs quota of 10 jobs** according to https://docs.aws.amazon.com/comprehend/latest/dg/guidelines-and-limits.html#limits-active-jobs. If that is the case, I was thinking of **adding another poll to check whether Comprehend is free before starting another targeted sentiment job** (see the sketch after this list).
4. Given that Step Functions charges for each polling cycle, and that there is a concurrent job limit of 10, I am worried about the backlog and the costs that would accrue if many executions were started. For example, if 1,000 workflows are started, workflow number 1,000 will be polling for an available Comprehend job slot for a long time.

Does anyone know of a solution to get around the concurrent active asynchronous jobs quota, or to reduce the cost of Step Functions continually polling for a long time?
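A minimal sketch of the capacity check from point 3, written as a Lambda the state machine could call before starting a job. `list_targeted_sentiment_detection_jobs` is the boto3 call; the hard-coded quota of 10 is an assumption mirroring the documented limit and would need updating if the quota changes:

```python
# Sketch of a capacity-check Lambda for the pattern in point 3.
# Assumes boto3's Comprehend client; MAX_ACTIVE mirrors the documented
# concurrent-active-jobs quota (an assumption, not read from the API).
import boto3

comprehend = boto3.client("comprehend")
MAX_ACTIVE = 10

def handler(event, context):
    # Count targeted-sentiment jobs that are still running.
    resp = comprehend.list_targeted_sentiment_detection_jobs(
        Filter={"JobStatus": "IN_PROGRESS"}
    )
    active = len(resp["TargetedSentimentDetectionJobPropertiesList"])
    # A Choice state can branch on this flag: start the job if there is a
    # free slot, otherwise Wait and re-invoke this check.
    return {"slotAvailable": active < MAX_ACTIVE, "activeJobs": active}
```

For the cost concern in point 4, a queue in front of the state machine (so executions only start when a slot is likely free) may be cheaper than having every execution poll, though that changes the architecture rather than the quota.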
1 answer · 0 votes · 14 views · asked a day ago

Simple IoT Core thing rule and SiteWise property ingestion config

Hi, I am seeking some basic plain-English assistance please; any help will be greatly appreciated.

I have a temperature sensor Thing receiving MQTT data:

* The Thing's name is: TestThing
* The data includes 4 fields: "TS", "datetime", "class", "data"
* The published topic is: TestTopic
* The IoT Core MQTT test client payload displays as: {"TS": 1656941515, "datetime": 04/07/2022 23:31:55, "class": Temperature, "data": 11}

I do not know how to fill out the IoT Core rule correctly, and I am also not sure whether the SiteWise asset propertyAlias is correct. Based on what I have read (see below), my latest failing configuration is:

**IoT Core rule config (UI)**

* SQL statement: SELECT * FROM 'TestTopic'
* Property alias: TestTopic
* Time in seconds: ${TS}
* Data type: DOUBLE
* Value: ${data}
* IAM role: created via the UI (create new role)

I have used data type DOUBLE, although I note the incoming data is seen within IoT Analytics as integer. The SiteWise model measurement definition is also set to DOUBLE.

**SiteWise config**

* Model measurement definitions: Temperature
* Asset measurement: Temperature
* Asset measurement - Temperature field (enter a property alias): TestTopic

I believe I have created the model and asset correctly, although I am not sure I understand what exactly the Property is, and thus the propertyAlias. Based on the above, I set the property alias for the Temperature field to: TestTopic.

**Setup and read history**

The setup is super simple for testing: a single IoT Thing pulling in MQTT topic data. The Thing was created as a single Thing with certificates; a MODBUS sensor is connected to a gateway with MQTT. No OPC server, no Greengrass, no LoRa. The IoT Analytics and AWS QuickSight services can access the data.

I have read: 1) all the AWS suggested questions, 2) the docs pages on SiteWise IoT Core data ingestion, including the tutorial section, 3) the AWS workshop pages, and 4) every wonderful YouTube video on the topic (not many).

Key articles read:

https://docs.aws.amazon.com/iot-sitewise/latest/userguide/ingest-data-from-iot-things.html
https://docs.aws.amazon.com/iot/latest/developerguide/iotsitewise-rule-action.html
https://docs.aws.amazon.com/iot-sitewise/latest/userguide/connect-data-streams.html
https://docs.aws.amazon.com/iot-sitewise/latest/userguide/iot-rules.html
https://iot-sitewise.workshop.aws/en/40_aws-iot-sitewise-data-ingestion.html
https://repost.aws/tags/TAGaSyCvg-SI2w6FYqm1H2RQ/aws-io-t-site-wise
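For comparison, here is the same rule expressed programmatically, which makes the propertyAlias/timestamp/value mapping explicit. A sketch only: the rule name and role ARN are placeholders, and the values mirror the UI settings above:

```python
# Sketch: the IoT Core rule from the question, via boto3.
# "TestThingToSiteWise" and the role ARN are hypothetical placeholders.
import boto3

iot = boto3.client("iot")

iot.create_topic_rule(
    ruleName="TestThingToSiteWise",
    topicRulePayload={
        "sql": "SELECT * FROM 'TestTopic'",
        "awsIotSqlVersion": "2016-03-23",
        "actions": [{
            "iotSiteWise": {
                "putAssetPropertyValueEntries": [{
                    # Must exactly match the alias set on the asset property.
                    "propertyAlias": "TestTopic",
                    "propertyValues": [{
                        "value": {"doubleValue": "${data}"},       # the "data" field
                        "timestamp": {"timeInSeconds": "${TS}"},   # the "TS" field
                    }],
                }],
                "roleArn": "arn:aws:iam::123456789012:role/MySiteWiseIngestRole",
            }
        }],
    },
)
```

One detail worth double-checking: the property alias is just a string key, so the alias entered on the SiteWise asset property and the one in the rule must match character for character.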
1 answer · 0 votes · 11 views · asked a day ago

AppStream2 User gets Blank Screen

I am completely new to AppStream and AWS, having used MS Azure for everything until now. Today I created an application using an Elastic fleet in AppStream 2.0. I created a VHD file, uploaded it to a new S3 bucket, and set a bucket policy (taken from a documentation example) to allow the service to read the VHD file and script:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAppStream2.0ToRetrieveObjects",
      "Effect": "Allow",
      "Principal": {
        "Service": "appstream.amazonaws.com"
      },
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::my-app/appdrive2.vhdx",
        "arn:aws:s3:::my-app/mount-vhd.ps1",
        "arn:aws:s3:::my-app/icon.png"
      ]
    }
  ]
}
```

When the user logs in via the web interface, they see the icon from the S3 storage. When they click the icon there are messages about reserving the session, setting up, etc., but after about 2-3 minutes the screen just goes black.

As suggested in another post, I set up a new fleet with desktop access. There is no C:\AppStream folder, no mount points, and I can't see the VHD being mounted or any errors about access. I opened Firefox, tried the S3 URI, and it cannot download the VHD.

I have found various articles about different access settings for S3 and the VPC, but I can't understand where I'm supposed to set this up or how it relates to the issue I am experiencing. E.g. this link - I can't see how it relates to the various objects in my setup: https://docs.aws.amazon.com/appstream2/latest/developerguide/managing-network-vpce-iam-policy.html
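One thing that may be worth checking, given the linked VPC-endpoint article: if the fleet's subnets have no route to the internet, AppStream fetches app blocks from S3 through a VPC endpoint. A sketch (the VPC ID and region are placeholders) for listing S3 endpoints in the fleet's VPC:

```python
# Sketch: check whether the fleet's VPC has an S3 VPC endpoint at all.
# "vpc-0123456789abcdef0" and the region are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")
resp = ec2.describe_vpc_endpoints(
    Filters=[
        {"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]},
        {"Name": "service-name", "Values": ["com.amazonaws.eu-west-1.s3"]},  # adjust region
    ]
)
for ep in resp["VpcEndpoints"]:
    print(ep["VpcEndpointId"], ep["VpcEndpointType"], ep["State"])
```

If the list is empty and the subnets also lack a NAT gateway or internet gateway route, the fleet instance has no path to the bucket, which would match the "can't download the VHD" symptom.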
1 answer · 0 votes · 26 views · asked a day ago

How to get traffic from a public API Gateway to a private one?

I would like to use [private API Gateways](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-endpoint-types.html#api-gateway-api-endpoint-types-private) to organise Lambda functions into microservices while keeping them invisible from the public internet. I would then like to expose specific calls using a public API Gateway. How do I get traffic from my public API Gateway to a private API Gateway?

**What I've looked at so far**

In the past, for **container-based resources**, I've used the following pattern:

*Internet -> API Gateway -> VPC Link -> VPC[NLB -> ECS]*

However, I can't find an equivalent bridge to get specific traffic to a private API Gateway, i.e.

*Internet -> API Gateway -> ? -> Private Gateway -> Lambda*

My instinct tells me that a network-based solution should exist (equivalent to VPC Link), but so far the only suggestions I've had involve:

- Solving using compute ( *Internet -> API Gateway -> VPC[Lambda proxy] -> Private Gateway -> Lambda* )
- Solving using load balancers ( *Internet -> API Gateway -> VPC Link -> VPC[NLB -> ALB] -> Private Gateway -> Lambda* )

Both of these approaches strike me as using the wrong (and expensive!) tools for the job: compute where no computation is required, and (two!!) load balancers where no load balancing is required (as Lambda effectively self-loadbalances).

**Alternative solutions**

Perhaps there's a better way (other than a private API Gateway) to organise collections of serverless resources into microservices. I'm attempting to use them to present a like-for-like interface with my container-based microservices, e.g. documented (Open API spec), authentication, traffic monitoring, etc. If using private API Gateways to wrap internal resources into microservices is actually a misuse, and there's a better way to do it, I'm happy to hear it.
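For anyone testing the network path from inside the VPC: a private API can be reached through an execute-api interface VPC endpoint by sending the private API's hostname in the Host header. A sketch, with a placeholder endpoint DNS name and API ID:

```python
# Sketch: invoking a private API Gateway through an execute-api VPC endpoint.
# Both the endpoint DNS name and the API ID below are hypothetical placeholders;
# run from a host inside the VPC (or connected to it).
import requests

VPCE_DNS = "vpce-0abc123def456-abcdefgh.execute-api.eu-west-1.vpce.amazonaws.com"
PRIVATE_API_HOST = "a1b2c3d4e5.execute-api.eu-west-1.amazonaws.com"

resp = requests.get(
    f"https://{VPCE_DNS}/prod/orders",
    headers={"Host": PRIVATE_API_HOST},  # selects the target private API
    timeout=5,
)
print(resp.status_code, resp.text)
```

This only demonstrates VPC-internal reachability; it doesn't by itself answer how the public API Gateway forwards to that endpoint, which is the crux of the question.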
1 answer · 0 votes · 27 views · asked a day ago

I need to attach an IAM role to my EC2 instance.

> PentestEnvironment-Deployment-Role/octopus is not authorized to perform: iam:PassRole on resource.

I have a CloudFormation template that creates the EC2 instance and IAM role for my environment, and I create the whole environment from a non-root account. The main part of the IAM policy for that account is (excerpt; the standard policy wrapper is reconstructed around the statements as posted):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IAM1",
      "Effect": "Allow",
      "Action": ["iam:PassRole"],
      "Resource": ["arn:aws:iam::*:role/Pentest-EC2-Role"],
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "ec2.amazonaws.com"
        },
        "StringLike": {
          "iam:AssociatedResourceARN": ["arn:aws:ec2:us-west-2:*:instance/*"]
        }
      }
    },
    {
      "Sid": "IAM2",
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "iam:CreateRole",
        "iam:DeleteRole",
        "iam:DetachRolePolicy",
        "iam:AttachRolePolicy",
        "iam:PutRolePolicy",
        "iam:GetRolePolicy"
      ],
      "Resource": ["arn:aws:iam::*:role/Pentest-EC2-Role"]
    },
    {
      "Sid": "IAM3",
      "Effect": "Allow",
      "Action": ["iam:ListRoles"],
      "Resource": ["*"]
    },
    {
      "Sid": "IAM4",
      "Effect": "Allow",
      "Action": [
        "iam:GetPolicy",
        "iam:CreatePolicy",
        "iam:ListPolicyVersions",
        "iam:CreatePolicyVersion",
        "iam:DeletePolicy",
        "iam:DeletePolicyVersion"
      ],
      "Resource": ["arn:aws:iam::*:policy/Pentest-AWS-resources-Access"]
    },
    {
      "Sid": "IAM5",
      "Effect": "Allow",
      "Action": [
        "iam:CreateInstanceProfile",
        "iam:DeleteInstanceProfile",
        "iam:RemoveRoleFromInstanceProfile",
        "iam:AddRoleToInstanceProfile"
      ],
      "Resource": "arn:aws:iam::*:instance-profile/Pentest-Instance-Profile"
    },
    {
      "Sid": "EC2InstanceProfile",
      "Effect": "Allow",
      "Action": [
        "ec2:DisassociateIamInstanceProfile",
        "ec2:AssociateIamInstanceProfile",
        "ec2:ReplaceIamInstanceProfileAssociation"
      ],
      "Resource": "arn:aws:ec2:*:*:instance/*"
    }
  ]
}
```

Why do I get this error?
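One way to narrow this down (a sketch; the account ID and principal ARN are placeholders) is to replay the decision with the IAM policy simulator. If the decision flips to allowed when the Condition context is supplied or relaxed, the `iam:AssociatedResourceARN` StringLike condition is a likely culprit, since the calling service may not supply that key at PassRole time:

```python
# Sketch: reproduce the iam:PassRole decision with the IAM policy simulator.
# The ARNs are placeholders built from the names in the question.
import boto3

iam = boto3.client("iam")

resp = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/PentestEnvironment-Deployment-Role",
    ActionNames=["iam:PassRole"],
    ResourceArns=["arn:aws:iam::123456789012:role/Pentest-EC2-Role"],
    ContextEntries=[{
        "ContextKeyName": "iam:PassedToService",
        "ContextKeyValues": ["ec2.amazonaws.com"],
        "ContextKeyType": "string",
    }],
)
for result in resp["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])
```

Running it twice, once with and once without a simulated `iam:AssociatedResourceARN` context entry, shows whether that condition is what denies the CloudFormation call.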
1 answer · 0 votes · 25 views · asked a day ago

Connecting Users to AWS Athena and AWS Lake Formation via Tableau Desktop using the Simba Athena JDBC Driver and Okta as Identity Provider

Hello, per the step-by-step guide in the official AWS Athena user guide (link at the end of the question), it should be possible to connect Tableau Desktop to Athena and Lake Formation via the Simba Athena JDBC driver, using Okta as the IdP. The challenge I am facing is that although I followed each step as documented, I cannot make the connection work. The error message I receive whenever I try to connect from Tableau Desktop states:

> [Simba][AthenaJDBC](100071) An error has been thrown from the AWS Athena client. The security token included in the request is invalid. [Execution ID not available] Invalid Username or Password.

My athena.properties file for configuring the driver via the connection string URL looks as follows (user name and password are masked):

```
jdbc:awsathena://AwsRegion=eu-central-1;
S3OutputLocation=s3://athena-query-results;
AwsCredentialsProviderClass=com.simba.athena.iamsupport.plugin.OktaCredentialsProvider;
idp_host=1234.okta.com;
User=*****.*****@example.com;
Password=******************;
app_id=****************************;
ssl_insecure=true;
okta_mfa_type=oktaverifywithpush;
LakeFormationEnabled=true;
```

The configuration settings are taken from the official Simba Athena JDBC driver documentation (version 2.0.31). Furthermore, I assigned the required permissions for my users and groups inside Lake Formation as stated in the step-by-step guide linked below. Right now I cannot work out why the connection fails, so I would be very grateful for any support or ideas on this topic.

Best regards

Link: https://docs.aws.amazon.com/athena/latest/ug/security-athena-lake-formation-jdbc-okta-tutorial.html#security-athena-lake-formation-jdbc-okta-tutorial-step-1-create-an-okta-account
0 answers · 0 votes · 10 views · asked a day ago

In CDK, how do you enable `associatePublicIpAddress` in an AutoScalingGroup that has a `mixedInstancesPolicy`?

I'm using AWS CDK and am trying to enable the `associatePublicIpAddress` property for an AutoScalingGroup that's using a launch template. My first attempt was to just set `associatePublicIpAddress: true`, but I get the error below (https://github.com/aws/aws-cdk/blob/master/packages/%40aws-cdk/aws-autoscaling/lib/auto-scaling-group.ts#L1526-L1528).

```typescript
// first attempt
new asg.AutoScalingGroup(this, 'ASG', {
  associatePublicIpAddress: true, // here
  minCapacity: 1,
  maxCapacity: 1,
  vpc,
  vpcSubnets: {
    subnetType: SubnetType.PUBLIC,
    onePerAz: true,
    availabilityZones: [availabilityZone],
  },
  mixedInstancesPolicy: {
    instancesDistribution: {
      spotMaxPrice: '1.00',
      onDemandPercentageAboveBaseCapacity: 0,
    },
    launchTemplate: new LaunchTemplate(this, 'LaunchTemplate', {
      securityGroup: this._securityGroup,
      role,
      instanceType,
      machineImage,
      userData: UserData.forLinux(),
    }),
    launchTemplateOverrides: [
      {
        instanceType: InstanceType.of(InstanceClass.T4G, InstanceSize.NANO),
      },
    ],
  },
  keyName,
})
```

```typescript
// I hit this error from the CDK
if (props.associatePublicIpAddress) {
  throw new Error('Setting \'associatePublicIpAddress\' must not be set when \'launchTemplate\' or \'mixedInstancesPolicy\' is set');
}
```

My second attempt was to not set `associatePublicIpAddress` and see whether it gets set automatically because the AutoScalingGroup is in a public subnet with an internet gateway. However, it still doesn't provision a public IP address.

Has anyone been able to create an Auto Scaling group with a mixed instances policy and an associated public IP?
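A workaround that may apply (sketched in Python CDK; TypeScript has the same escape hatch via `launchTemplate.node.defaultChild` and `addPropertyOverride`): push the public-IP setting into the launch template's own network interface on the L1 resource, bypassing the L2 check. This is an assumption-laden sketch, not a confirmed CDK feature:

```python
# Sketch: L1 escape hatch on the launch template. Assumes `launch_template`
# and `security_group` already exist in the stack.
from aws_cdk import aws_ec2 as ec2

cfn_lt: ec2.CfnLaunchTemplate = launch_template.node.default_child

# Define the primary network interface with a public IP on the template itself.
cfn_lt.add_property_override(
    "LaunchTemplateData.NetworkInterfaces",
    [{
        "DeviceIndex": 0,
        "AssociatePublicIpAddress": True,
        "Groups": [security_group.security_group_id],
    }],
)
# A launch template cannot set both NetworkInterfaces and top-level
# SecurityGroupIds, so the latter (added by the L2 securityGroup prop)
# likely needs removing:
cfn_lt.add_property_deletion_override("LaunchTemplateData.SecurityGroupIds")
```

On the second attempt: a public subnet alone does not guarantee a public IP; that also depends on the subnet's "auto-assign public IP" setting, which the launch template's network interface config can override.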
0 answers · 0 votes · 9 views · asked a day ago

Unable to provision IoT devices using FleetProvisioningByClaim

I am trying to provision a new device using fleet provisioning by claim, following https://docs.aws.amazon.com/greengrass/v2/developerguide/fleet-provisioning.html. All the claim credentials are on the device, and iotDataEndpoint/iotCredentialEndpoint/provisioningTemplate/rootCaPath are set. After running the final installer command (the full Java command is below), I did receive "Successfully set up Nucleus as a system service" and Greengrass is running, but I cannot find the device in IoT Core. I have attached the relevant greengrass.log output below; my nucleus and main.log files are empty. Is there any other way to debug this issue?

config.yaml file:

```yaml
services:
  aws.greengrass.Nucleus:
    version: "2.5.6"
    configuration:
      awsRegion: "us-east-1"
  aws.greengrass.FleetProvisioningByClaim:
    configuration:
      rootPath: /greengrass/v2
      awsRegion: "us-east-1"
      iotDataEndpoint: "$iotDataEndpoint" # replaced with the endpoint from `aws iot describe-endpoint --endpoint-type iot:Data-ATS`
      iotCredentialEndpoint: "$iotCredentialEndpoint" # replaced with the endpoint from `aws iot describe-endpoint --endpoint-type iot:CredentialProvider`
      iotRoleAlias: "GreengrassV2TokenExchangeRoleAlias"
      provisioningTemplate: "$provisioningTemplate" # value copied from "Fleet provisioning templates" in IoT Core
      claimCertificatePath: "/greengrass/v2/claim-certs/claim.pem.crt" # copied from certificatePem as mentioned in https://tiny.amazon.com/n4qhu1jm/docsawsamaziotlateapirAPI_
      claimCertificatePrivateKeyPath: "/greengrass/v2/claim-certs/claim.private.pem.key" # copied from keyPair.privateKey as mentioned in https://tiny.amazon.com/n4qhu1jm/docsawsamaziotlateapirAPI_
      rootCaPath: "/greengrass/v2/AmazonRootCA1.pem" # verified the file is present
      templateParameters:
        ThingName: "$thingName" # replaced $thingName with my thing name
        ThingGroupName: "$thingGroupName" # replaced $thingGroupName with my group name
```

Java command:

```shell
sudo -E java -Droot="/greengrass/v2" -Dlog.store=FILE \
  -jar /target/GreengrassInstaller/lib/Greengrass.jar \
  --trusted-plugin /target/GreengrassInstaller/aws.greengrass.FleetProvisioningByClaim.jar \
  --init-config /target/GreengrassInstaller/config.yaml \
  --component-default-user ggc_user:ggc_group \
  --setup-system-service true
```

greengrass.log:

```
2022-07-03T14:33:00.260Z [ERROR] (pool-2-thread-1) com.aws.greengrass.FleetProvisioningByClaimPlugin: Exception encountered while getting device identity information. {}
software.amazon.awssdk.crt.CrtRuntimeException: aws_tls_ctx_options_init_client_mtls_from_path failed (aws_last_error: AWS_ERROR_INVALID_ARGUMENT(34), An invalid argument was passed to a function.) AWS_ERROR_INVALID_ARGUMENT(34)
	at software.amazon.awssdk.crt.io.TlsContextOptions.tlsContextOptionsNew(Native Method)
	at software.amazon.awssdk.crt.io.TlsContextOptions.getNativeHandle(TlsContextOptions.java:108)
	at software.amazon.awssdk.crt.io.TlsContext.<init>(TlsContext.java:24)
	at software.amazon.awssdk.crt.io.ClientTlsContext.<init>(ClientTlsContext.java:26)
	at software.amazon.awssdk.iot.AwsIotMqttConnectionBuilder.build(AwsIotMqttConnectionBuilder.java:619)
	at com.aws.greengrass.MqttConnectionHelper.getMqttConnection(MqttConnectionHelper.java:66)
	at com.aws.greengrass.FleetProvisioningByClaimPlugin.updateIdentityConfiguration(FleetProvisioningByClaimPlugin.java:142)
	at com.aws.greengrass.lifecyclemanager.KernelLifecycle.lambda$executeProvisioningPlugin$1(KernelLifecycle.java:199)
	at com.aws.greengrass.util.RetryUtils.runWithRetry(RetryUtils.java:50)
	at com.aws.greengrass.lifecyclemanager.KernelLifecycle.lambda$executeProvisioningPlugin$2(KernelLifecycle.java:198)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
```
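Since `aws_tls_ctx_options_init_client_mtls_from_path` generally fails when the certificate or key file cannot be read or parsed, one quick sanity check (a sketch using only the Python standard library and the exact paths from config.yaml) is to try loading the pair directly:

```python
# Sketch: verify the claim certificate/key pair is readable and well-formed PEM.
# Uses the paths from config.yaml; run as root, since /greengrass/v2 is root-owned.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
try:
    ctx.load_cert_chain(
        certfile="/greengrass/v2/claim-certs/claim.pem.crt",
        keyfile="/greengrass/v2/claim-certs/claim.private.pem.key",
    )
    print("certificate and key parsed and match")
except (ssl.SSLError, OSError) as err:
    # A bad path, wrong permissions, or malformed/mismatched PEM lands here.
    print("problem with cert/key:", err)
```

If this fails, the PEM files were likely corrupted when copied onto the device (e.g. escaped `\n` sequences instead of real newlines), which would explain the AWS_ERROR_INVALID_ARGUMENT from the CRT layer.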
1 answer · 0 votes · 19 views · asked 2 days ago