Questions in Compute


Clock gating / using "highly discouraged" constraint

I have a wide, deep shift register in my design that I want to control using a gated clock. Its size and distribution throughout the entire device make a clock enable an inferior option from a routability and resource-usage perspective. I tried the approaches below and believe my design is working, but I have misgivings about using a constraint that is "highly discouraged" without truly understanding what I'm doing and whether there is a recommended alternative. Guidance on the suitability of my approach, and on why the constraint is "highly discouraged", would be appreciated.

I sequentially tried:

1) Inferring the gated clock from Verilog RTL code in various ways. Nothing looked good.

2) Instantiating a BUFGCE primitive alone.

```
CRITICAL WARNING: [DRC HDPR-59] Clock Net Rule Violation: Illegal clock load 'WRAPPER_INST/CL/BUFGCE_unknown' found on PR boundary clock net 'WRAPPER_INST/SH/kernel_clks_i/clkwiz_sys_clk/inst/CLK_CORE_DRP_I/clk_inst/clk_out1'. Boundary clock nets are not fully supported to drive loads of type BUFGCE inside a reconfigurable region. This type of connection may cause downstream tool issues. The recommended solution is to add an MMCM as the clock load driving the original BUFGCE load.
```

3) Instantiating an MMCME4_BASE primitive before a BUFGCE primitive.

```
Phase 1.2 IO Placement/ Clock Placement/ Build Placer Device ERROR: [Place 30-718] Sub-optimal placement for an MMCM/PLL-BUFGCE-MMCM/PLL cascade pair.If this sub optimal condition is acceptable for this design, you may use the CLOCK_DEDICATED_ROUTE constraint in the .xdc file to demote this message to a WARNING. However, the use of this override is highly discouraged. These examples can be used directly in the .xdc file to override this clock rule. 
set_property CLOCK_DEDICATED_ROUTE ANY_CMT_COLUMN [get_nets WRAPPER_INST/SH/kernel_clks_i/clkwiz_sys_clk/inst/CLK_CORE_DRP_I/clk_inst/clk_out1] WRAPPER_INST/SH/kernel_clks_i/clkwiz_sys_clk/inst/CLK_CORE_DRP_I/clk_inst/clkout1_buf (BUFGCE.O) is locked to BUFGCE_X1Y181 (in SLR 1) The loads are distributed to 1 user pblock constraints. In addition, there are 0 loads not in user pblock constraints. Displaying the first 1 loads for pblock constraint 1 WRAPPER_INST/CL/MMCME4_BASE_inst (MMCME4_ADV.CLKIN1) is provisionally placed by clockplacer on MMCM_X0Y5 (in SLR 1) The above error could possibly be related to other connected instances. Following is a list of all the related clock rules and their respective instances. Clock Rule: rule_bufgce_bufg_conflict Status: PASS Rule Description: Only one of the 2 available sites (BUFGCE or BUFGCE_DIV/BUFGCTRL) in a pair can be used at the same time WRAPPER_INST/CL/BUFGCE_unknown (BUFGCE.O) is provisionally placed by clockplacer on BUFGCE_X0Y120 (in SLR 1) Clock Rule: rule_mmcm_bufg Status: PASS Rule Description: A MMCM driving a BUFG must be placed in the same clock region of the device as the BUFG WRAPPER_INST/CL/MMCME4_BASE_inst (MMCME4_ADV.CLKOUT0) is provisionally placed by clockplacer on MMCM_X0Y5 (in SLR 1) WRAPPER_INST/CL/BUFGCE_unknown (BUFGCE.I) is provisionally placed by clockplacer on BUFGCE_X0Y120 (in SLR 1) Clock Rule: rule_bufgce_bufg_conflict Status: PASS Rule Description: Only one of the 2 available sites (BUFGCE or BUFGCE_DIV/BUFGCTRL) in a pair can be used at the same time WRAPPER_INST/SH/kernel_clks_i/clkwiz_sys_clk/inst/CLK_CORE_DRP_I/clk_inst/clkout1_buf (BUFGCE.O) is locked to BUFGCE_X1Y181 (in SLR 1) Clock Rule: rule_mmcm_bufg Status: PASS Rule Description: A MMCM driving a BUFG must be placed in the same clock region of the device as the BUFG WRAPPER_INST/SH/kernel_clks_i/clkwiz_sys_clk/inst/CLK_CORE_DRP_I/clk_inst/mmcme3_adv_inst (MMCME4_ADV.CLKOUT0) is locked to MMCM_X1Y7 (in SLR 1) 
WRAPPER_INST/SH/kernel_clks_i/clkwiz_sys_clk/inst/CLK_CORE_DRP_I/clk_inst/clkout1_buf (BUFGCE.I) is locked to BUFGCE_X1Y181 (in SLR 1) Clock Rule: rule_bufgce_bufg_conflict Status: PASS Rule Description: Only one of the 2 available sites (BUFGCE or BUFGCE_DIV/BUFGCTRL) in a pair can be used at the same time WRAPPER_INST/SH/kernel_clks_i/clkwiz_sys_clk/inst/CLK_CORE_DRP_I/clk_inst/clkout2_buf (BUFGCE.O) is locked to BUFGCE_X1Y183 (in SLR 1) Clock Rule: rule_bufgce_bufg_conflict Status: PASS Rule Description: Only one of the 2 available sites (BUFGCE or BUFGCE_DIV/BUFGCTRL) in a pair can be used at the same time WRAPPER_INST/SH/kernel_clks_i/clkwiz_sys_clk/inst/CLK_CORE_DRP_I/clk_inst/clkout3_buf (BUFGCE.O) is locked to BUFGCE_X1Y172 (in SLR 1) Clock Rule: rule_bufgce_bufg_conflict Status: PASS Rule Description: Only one of the 2 available sites (BUFGCE or BUFGCE_DIV/BUFGCTRL) in a pair can be used at the same time WRAPPER_INST/SH/kernel_clks_i/clkwiz_sys_clk/inst/CLK_CORE_DRP_I/clk_inst/clkout4_buf (BUFGCE.O) is locked to BUFGCE_X1Y171 (in SLR 1) Resolution: The MMCM/PLL-BUFGCE-MMCM/PLL cascade pair can use the dedicated path between them if they are placed in vertically adjacent clock regions and in the same column (LEFT/RIGHT) of the device. ``` 4) Instantiating a MMCME4_BASE before a BUFGCE primitive and constraining the design with `set_property CLOCK_DEDICATED_ROUTE ANY_CMT_COLUMN [get_nets WRAPPER_INST/SH/kernel_clks_i/clkwiz_sys_clk/inst/CLK_CORE_DRP_I/clk_inst/clk_out1]` in cl_pnr_user.xdc. This met with success.
0
answers
0
votes
2
views
asked 10 hours ago

Amazon Lex - Send empty message prompt from AWS Lambda

Hello, I'm using Amazon Lex V2 with code hooks in AWS Lambda. In some particular cases I'm trying not to send any message back to the user after closing the dialog, but the documentation (https://docs.aws.amazon.com/lexv2/latest/dg/lambda.html) doesn't seem to be in line with what actually happens in Lex (I'm receiving an error; see the error message in the section **Lambda response which doesn't work**). For a dialogAction of type Close with an intent in state Fulfilled, Lex requires me to fill in the messages part of the JSON response (otherwise I receive an error), even though the documentation mentions it is only necessary for an ElicitIntent dialog action.

Sending a Delegate dialogAction and not adding any closing messages to the Lex intent (in the Lex console) does work, but this seems like a hacky solution for something that should already work with the Close dialogAction. (See the section **Lambda response which works if there is no closing message in Lex**.)

You may ask why I would want that instead of sending a default response message (e.g. "Please wait" or "One moment"): Lex is integrated into a 3rd-party application, and business logic with prompts saying the same thing is already integrated there. Am I missing something with the response format in Lambda?

**Lambda response which doesn't work**

Response:
```
{ sessionState: { dialogAction: { type: "Close" }, intent: { name: "TestIntent", slots: event.sessionState.intent.slots, state: "Fulfilled" } } };
```
Error:
`Invalid Lambda Response: The Lambda code hook didn't contain a message. The intent is not configured with a follow up prompt, a conclusion statement, or a fulfillment success response.`

**Lambda response which works if there is no closing message in Lex**

Response:
```
{ sessionState: { dialogAction: { type: "Delegate" } } };
```
Result: No error, only the message text for which the intent was fulfilled (only in the Lex console chat), and it works fine in the 3rd-party application.

Thank you,
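For reference, here is the working Delegate shape rendered as a minimal Python code hook. This is a sketch, not the asker's actual Lambda (the handler signature and empty event are assumptions); it simply returns the Delegate response that avoided the error:

```python
import json

def lambda_handler(event, context):
    # Hypothetical minimal code hook: Close without "messages" was rejected by
    # Lex in the tests above, so this returns the Delegate shape that worked,
    # letting Lex finish the conversation without a closing message.
    return {
        "sessionState": {
            "dialogAction": {"type": "Delegate"}
        }
    }

print(json.dumps(lambda_handler({}, None)))
```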
0
answers
0
votes
10
views
asked 11 hours ago

Insufficient privileges for accessing data in S3 when running a Lambda function to create a Personalize dataset import job

I am trying to create a Lambda function to automate the creation of a dataset import job in Personalize. I followed this guide: https://docs.aws.amazon.com/personalize/latest/dg/granting-personalize-s3-access.html#attaching-s3-policy-to-role and kept getting the same error: "Insufficient privileges for accessing data in S3". Here are the steps I took:

1. Add AmazonPersonalizeFullAccess to my IAM user.

2. Create a personalizeLambda role with 4 policies:
- AmazonS3FullAccess
- CloudWatchLogsFullAccess
- AmazonPersonalizeFullAccess
- AWSLambdaBasicExecutionRole

This didn't work (same error as above), so I added this policy:

- PersonalizeS3BucketAccessPolicyCustom:

```
{
    "Version": "2012-10-17",
    "Id": "PersonalizeS3BucketAccessPolicyCustom",
    "Statement": [
        {
            "Sid": "PersonalizeS3BucketAccessPolicy",
            "Effect": "Allow",
            "Action": [ "s3:*" ],
            "Resource": [
                "arn:aws:s3:::<bucket-name>",
                "arn:aws:s3:::<bucket-name>/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": [
                "arn:aws:s3:::<bucket-name>",
                "arn:aws:s3:::<bucket-name>/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": [
                "arn:aws:lambda:<region>:<id>:function:create-personalize-model*",
                "arn:aws:lambda:<region>:<id>:function:create-personalize-dataset-import-job"
            ]
        }
    ]
}
```

3. Create a bucket policy on the S3 bucket that has the dataset files:

```
{
    "Version": "2012-10-17",
    "Id": "PersonalizeS3BucketAccessPolicy",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<id>:role/personalizeLambda",
                "Service": "personalize.amazonaws.com"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::jfna-personalize"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<id>:role/personalizeLambda",
                "Service": "personalize.amazonaws.com"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::jfna-personalize/*"
        }
    ]
}
```

I still get the same error no matter how many times I've followed the guide. I would really appreciate it if someone could help me figure out what I'm missing or did wrong.
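One thing that stands out in step 2: `sts:AssumeRole` is an action performed on a *role*, so granting it against S3 bucket ARNs has no effect. What the import job needs, per the linked guide, is a service role whose trust policy lets the Personalize service itself assume it. A sketch of that trust policy, wrapped in Python only so it can be printed and validated (which role this applies to is whatever you pass as `roleArn` when creating the import job):

```python
import json

# Trust policy for the role handed to Personalize (not the Lambda execution
# role): without it, Personalize cannot assume the role and use its S3
# permissions, and reports "Insufficient privileges for accessing data in S3".
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "personalize.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```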
2
answers
0
votes
15
views
asked 14 hours ago

Not able to convert Hugging Face fine-tuned BERT model into AWS Neuron

Hi Team, I have a fine-tuned BERT model that was trained using the following libraries:

- torch == 1.8.1+cu111
- transformers == 4.19.4

I am not able to convert that fine-tuned BERT model into AWS Neuron and get the following compilation errors. Could you please help me resolve this issue?

**Note:** I am trying to compile the BERT model on a SageMaker notebook instance in the "conda_python3" conda environment.

**Installation:**

```
# Set pip repository to point to the Neuron repository
!pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install Neuron PyTorch - Note: tried both options below.
#!pip install torch-neuron==1.8.1.* neuron-cc[tensorflow] "protobuf<4" torchvision sagemaker>=2.79.0 transformers==4.17.0 --upgrade
!pip install --upgrade torch-neuron neuron-cc[tensorflow] "protobuf<4" torchvision
```

---

**Model compilation:**

```
import os
import tensorflow  # to work around a protobuf version conflict issue
import torch
import torch.neuron
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_path = 'model/'  # Model artifacts are stored in 'model/' directory

# load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path, torchscript=True)

# create dummy input for max length 128
dummy_input = "dummy input which will be padded later"
max_length = 128
embeddings = tokenizer(dummy_input, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt")
neuron_inputs = tuple(embeddings.values())

# compile model with torch.neuron.trace and update config
model_neuron = torch.neuron.trace(model, neuron_inputs)
model.config.update({"traced_sequence_length": max_length})

# save tokenizer, neuron model and config for later use
save_dir = "tmpd"
os.makedirs(save_dir, exist_ok=True)
model_neuron.save(os.path.join(save_dir, "neuron_model.pt"))
tokenizer.save_pretrained(save_dir)
model.config.save_pretrained(save_dir)
```

---

**Model artifacts** (from a multi-label topic classification model):

- config.json
- model.tar.gz
- pytorch_model.bin
- special_tokens_map.json
- tokenizer_config.json
- tokenizer.json

---

**Error logs:**

```
INFO:Neuron:There are 3 ops of 1 different types in the TorchScript that are not compiled by neuron-cc: aten::embedding, (For more information see https://github.com/aws/aws-neuron-sdk/blob/master/release-notes/neuron-cc-ops/neuron-cc-ops-pytorch.md) INFO:Neuron:Number of arithmetic operators (pre-compilation) before = 565, fused = 548, percent fused = 96.99% INFO:Neuron:Number of neuron graph operations 1601 did not match traced graph 1323 - using heuristic matching of hierarchical information WARNING:tensorflow:From /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/torch_neuron/ops/aten.py:2022: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. 
Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where INFO:Neuron:Compiling function _NeuronGraph$698 with neuron-cc INFO:Neuron:Compiling with command line: '/home/ec2-user/anaconda3/envs/python3/bin/neuron-cc compile /tmp/tmpv4gg13ze/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmpv4gg13ze/graph_def.neff --io-config {"inputs": {"0:0": [[1, 128, 768], "float32"], "1:0": [[1, 1, 1, 128], "float32"]}, "outputs": ["Linear_5/aten_linear/Add:0"]} --verbose 35' INFO:Neuron:Compile command returned: -9 WARNING:Neuron:torch.neuron.trace failed on _NeuronGraph$698; falling back to native python function call ERROR:Neuron:neuron-cc failed with the following command line call: /home/ec2-user/anaconda3/envs/python3/bin/neuron-cc compile /tmp/tmpv4gg13ze/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmpv4gg13ze/graph_def.neff --io-config '{"inputs": {"0:0": [[1, 128, 768], "float32"], "1:0": [[1, 1, 1, 128], "float32"]}, "outputs": ["Linear_5/aten_linear/Add:0"]}' --verbose 35 Traceback (most recent call last): File "/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/torch_neuron/convert.py", line 382, in op_converter item, inputs, compiler_workdir=sg_workdir, **kwargs) File "/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/torch_neuron/decorators.py", line 220, in trace 'neuron-cc failed with the following command line call:\n{}'.format(command)) subprocess.SubprocessError: neuron-cc failed with the following command line call: /home/ec2-user/anaconda3/envs/python3/bin/neuron-cc compile /tmp/tmpv4gg13ze/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmpv4gg13ze/graph_def.neff --io-config '{"inputs": {"0:0": [[1, 128, 768], "float32"], "1:0": [[1, 1, 1, 128], "float32"]}, "outputs": ["Linear_5/aten_linear/Add:0"]}' --verbose 35 INFO:Neuron:Number of arithmetic operators (post-compilation) before = 565, 
compiled = 0, percent compiled = 0.0% INFO:Neuron:The neuron partitioner created 1 sub-graphs INFO:Neuron:Neuron successfully compiled 0 sub-graphs, Total fused subgraphs = 1, Percent of model sub-graphs successfully compiled = 0.0% INFO:Neuron:Compiled these operators (and operator counts) to Neuron: INFO:Neuron:Not compiled operators (and operator counts) to Neuron: INFO:Neuron: => aten::Int: 97 [supported] INFO:Neuron: => aten::add: 39 [supported] INFO:Neuron: => aten::contiguous: 12 [supported] INFO:Neuron: => aten::div: 12 [supported] INFO:Neuron: => aten::dropout: 38 [supported] INFO:Neuron: => aten::embedding: 3 [not supported] INFO:Neuron: => aten::gelu: 12 [supported] INFO:Neuron: => aten::layer_norm: 25 [supported] INFO:Neuron: => aten::linear: 74 [supported] INFO:Neuron: => aten::matmul: 24 [supported] INFO:Neuron: => aten::mul: 1 [supported] INFO:Neuron: => aten::permute: 48 [supported] INFO:Neuron: => aten::rsub: 1 [supported] INFO:Neuron: => aten::select: 1 [supported] INFO:Neuron: => aten::size: 97 [supported] INFO:Neuron: => aten::slice: 5 [supported] INFO:Neuron: => aten::softmax: 12 [supported] INFO:Neuron: => aten::tanh: 1 [supported] INFO:Neuron: => aten::to: 1 [supported] INFO:Neuron: => aten::transpose: 12 [supported] INFO:Neuron: => aten::unsqueeze: 2 [supported] INFO:Neuron: => aten::view: 48 [supported] --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-1-97bba321d013> in <module> 18 19 # compile model with torch.neuron.trace and update config ---> 20 model_neuron = torch.neuron.trace(model, neuron_inputs) 21 model.config.update({"traced_sequence_length": max_length}) 22 ~/anaconda3/envs/python3/lib/python3.6/site-packages/torch_neuron/convert.py in trace(func, example_inputs, fallback, op_whitelist, minimum_segment_size, subgraph_builder_function, subgraph_inputs_pruning, skip_compiler, debug_must_trace, allow_no_ops_on_neuron, compiler_workdir, 
dynamic_batch_size, compiler_timeout, _neuron_trace, compiler_args, optimizations, verbose, **kwargs) 182 logger.debug("skip_inference_context - trace with fallback at {}".format(get_file_and_line())) 183 neuron_graph = cu.compile_fused_operators(neuron_graph, **compile_kwargs) --> 184 cu.stats_post_compiler(neuron_graph) 185 186 # Wrap the compiled version of the model in a script module. Note that this is ~/anaconda3/envs/python3/lib/python3.6/site-packages/torch_neuron/convert.py in stats_post_compiler(self, neuron_graph) 491 if succesful_compilations == 0 and not self.allow_no_ops_on_neuron: 492 raise RuntimeError( --> 493 "No operations were successfully partitioned and compiled to neuron for this model - aborting trace!") 494 495 if percent_operations_compiled < 50.0: RuntimeError: No operations were successfully partitioned and compiled to neuron for this model - aborting trace! ``` --------------------------------------------------------------------------------------------------------------------------------------------------- Thanks a lot.
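One hedged observation about the log above: `Compile command returned: -9` follows Python's subprocess convention, where a negative return code is the negated number of the signal that killed the process; -9 is SIGKILL, which on a notebook instance is most commonly the kernel's out-of-memory killer reaping `neuron-cc`. If that assumption holds, retrying on an instance with more RAM would be the first thing to try. A small demonstration of the convention (this mimics the kill, it does not run the Neuron toolchain):

```python
import signal
import subprocess
import sys

# Launch a child that kills itself with SIGKILL, mimicking what the OOM killer
# does to neuron-cc; subprocess reports this as a negative returncode.
proc = subprocess.run(
    [sys.executable, "-c", "import os, signal; os.kill(os.getpid(), signal.SIGKILL)"]
)
print(proc.returncode)  # -9 on Linux, i.e. -signal.SIGKILL
```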
0
answers
0
votes
5
views
asked 18 hours ago

How to get traffic from a public API Gateway to a private one?

I would like to use [private API Gateways](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-endpoint-types.html#api-gateway-api-endpoint-types-private) to organise Lambda functions into microservices, while keeping them invisible from the public internet. I would then like to expose specific calls using a public API Gateway. How do I get traffic from my public API Gateway to a private API Gateway? **What I've looked at so far** In the past, for **container-based resources**, I've used the following pattern: *Internet -> API Gateway -> VPC Link -> VPC[NLB -> ECS]* However, I can't find an equivalent bridge to get specific traffic to a private API Gateway. I.e. *Internet -> API Gateway -> ? -> Private Gateway -> Lambda* My instinct tells me that a network-based solution should exist (equivalent to VPC Link), but so far the only suggestions I've had involve: - Solving using compute ( *Internet -> API Gateway -> VPC[Lambda proxy] -> Private Gateway -> Lambda* ) - Solving using load balancers ( *Internet -> API Gateway -> VPC Link -> VPC[NLB -> ALB] -> Private Gateway -> Lambda* ) Both of these approaches strike me as using the wrong (and expensive!) tools for the job. I.e. Compute where no computation is required and (two!!) load balancers where no load balancing is required (as Lambda effectively self-loadbalances). **Alternative solutions** Perhaps there's a better way (other than a private API Gateway) to organise collections of serverless resources into microservices. I'm attempting to use them to present a like-for-like interface that my container-based microservices would have. E.g. Documented (Open API spec), authentication, traffic monitoring, etc. If using private API Gateways to wrap internal resources into microservices is actually a misuse, and there's a better way to do it, I'm happy to hear it.
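For completeness, the documented way a caller that is already inside the VPC reaches a private API is through the execute-api interface VPC endpoint, passing the API's id in an `x-apigw-api-id` (or Host) header. This doesn't by itself bridge public to private, but it shows the only network path a private API exposes, which is why the suggestions above all route traffic into the VPC first. A sketch with placeholder endpoint and API ids:

```python
import urllib.request

# Placeholders -- substitute your interface endpoint's DNS name and API id.
vpce_dns = "vpce-0123456789abcdef0-abcdefgh.execute-api.us-east-1.vpce.amazonaws.com"
api_id = "a1b2c3d4e5"

# The request targets the VPC endpoint; the x-apigw-api-id header tells
# API Gateway which private API the call is for.
req = urllib.request.Request(
    f"https://{vpce_dns}/prod/my-resource",
    headers={"x-apigw-api-id": api_id},
)
print(req.full_url)
```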
1
answers
0
votes
33
views
asked 2 days ago

I need to attach an IAM role to my EC2 instance.

I get the error: "PentestEnvironment-Deployment-Role/octopus is not authorized to perform: iam:PassRole on resource." I have a CloudFormation template that creates the EC2 instance and the IAM role for my environment, and I create this whole environment from a non-root account. The IAM policy for that account (only the main part) is:

```
{ "Sid": "IAM1", "Effect": "Allow", "Action": [ "iam:PassRole" ], "Resource": [ "arn:aws:iam::*:role/Pentest-EC2-Role" ], "Condition": { "StringEquals": { "iam:PassedToService": "ec2.amazonaws.com" }, "StringLike": { "iam:AssociatedResourceARN": [ "arn:aws:ec2:us-west-2:*:instance/*" ] } } },
{ "Sid": "IAM2", "Effect": "Allow", "Action": [ "iam:GetRole", "iam:CreateRole", "iam:DeleteRole", "iam:DetachRolePolicy", "iam:AttachRolePolicy", "iam:PutRolePolicy", "iam:GetRolePolicy" ], "Resource": [ "arn:aws:iam::*:role/Pentest-EC2-Role" ] },
{ "Sid": "IAM3", "Effect": "Allow", "Action": [ "iam:ListRoles" ], "Resource": [ "*" ] },
{ "Sid": "IAM4", "Effect": "Allow", "Action": [ "iam:GetPolicy", "iam:CreatePolicy", "iam:ListPolicyVersions", "iam:CreatePolicyVersion", "iam:DeletePolicy", "iam:DeletePolicyVersion" ], "Resource": [ "arn:aws:iam::*:policy/Pentest-AWS-resources-Access" ] },
{ "Sid": "IAM5", "Effect": "Allow", "Action": [ "iam:CreateInstanceProfile", "iam:DeleteInstanceProfile", "iam:RemoveRoleFromInstanceProfile", "iam:AddRoleToInstanceProfile" ], "Resource": "arn:aws:iam::*:instance-profile/Pentest-Instance-Profile" },
{ "Sid": "EC2InstanceProfile", "Effect": "Allow", "Action": [ "ec2:DisassociateIamInstanceProfile", "ec2:AssociateIamInstanceProfile", "ec2:ReplaceIamInstanceProfileAssociation" ], "Resource": "arn:aws:ec2:*:*:instance/*" } ]
}
```

Why do I have this error?
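A common cause of this particular denial is that the role ARN actually being passed (its name or path) doesn't match the `Resource` pattern in the `IAM1` statement, or the pass happens outside the `iam:PassedToService` / `iam:AssociatedResourceARN` conditions. As a quick offline sanity check, IAM's `*` in a Resource behaves much like a shell glob, so the match can be approximated locally (placeholder account id; no AWS calls):

```python
import fnmatch

# Resource pattern from the IAM1 statement in the question.
policy_resource = "arn:aws:iam::*:role/Pentest-EC2-Role"

# The ARN your template / instance profile actually passes -- placeholders.
exact_match = "arn:aws:iam::123456789012:role/Pentest-EC2-Role"
renamed_role = "arn:aws:iam::123456789012:role/Pentest-EC2-Role-v2"

print(fnmatch.fnmatchcase(exact_match, policy_resource))   # matches
print(fnmatch.fnmatchcase(renamed_role, policy_resource))  # does not match
```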
1
answers
0
votes
26
views
asked 2 days ago

In CDK, how do you enable `associatePublicIpAddress` in an AutoScalingGroup that has a `mixedInstancesPolicy`?

I'm using AWS CDK and am trying to enable the `associatePublicIpAddress` property for an AutoScalingGroup that's using a launch template. My first attempt was to just set `associatePublicIpAddress: true`, but I get this error (https://github.com/aws/aws-cdk/blob/master/packages/%40aws-cdk/aws-autoscaling/lib/auto-scaling-group.ts#L1526-L1528):

```typescript
// first attempt
new asg.AutoScalingGroup(this, 'ASG', {
  associatePublicIpAddress: true, // here
  minCapacity: 1,
  maxCapacity: 1,
  vpc,
  vpcSubnets: {
    subnetType: SubnetType.PUBLIC,
    onePerAz: true,
    availabilityZones: [availabilityZone],
  },
  mixedInstancesPolicy: {
    instancesDistribution: {
      spotMaxPrice: '1.00',
      onDemandPercentageAboveBaseCapacity: 0,
    },
    launchTemplate: new LaunchTemplate(this, 'LaunchTemplate', {
      securityGroup: this._securityGroup,
      role,
      instanceType,
      machineImage,
      userData: UserData.forLinux(),
    }),
    launchTemplateOverrides: [
      {
        instanceType: InstanceType.of(InstanceClass.T4G, InstanceSize.NANO),
      },
    ],
  },
  keyName,
})
```

```typescript
// I hit this error from the CDK
if (props.associatePublicIpAddress) {
  throw new Error('Setting \'associatePublicIpAddress\' must not be set when \'launchTemplate\' or \'mixedInstancesPolicy\' is set');
}
```

My second attempt was to not set `associatePublicIpAddress` and see if it gets set automatically because the AutoScalingGroup is in a public subnet with an internet gateway. However, it still doesn't provision a public IP address. Has anyone been able to create an Auto Scaling group with a mixed-instances policy and an associated public IP?
1
answers
0
votes
13
views
asked 2 days ago

S3 Batch Operations job fails due to missing VersionId

I created a POC Batch Operations job that invokes a Lambda function. The Lambda gets the file, does a transformation, copies the transformed file into a new bucket, and deletes the file from the old bucket upon completion. The batch job fails before invoking my Lambda, and the report CSV shows the following errors:

```
eceeecom-5732-poc-old,emailstore/0079d564-dccc-4066-a42d-8d9113097d02,,failed,400,InvalidRequest,Task failed due to missing VersionId
eceeecom-5732-poc-old,emailstore/00975dec-f64b-4932-a1e8-9ec1284f76bb,,failed,400,InvalidRequest,Task failed due to missing VersionId
```

My manifest.json:

```
{
  "sourceBucket" : "poc-old",
  "destinationBucket" : "arn:aws:s3:::poc-new",
  "version" : "2016-11-30",
  "creationTimestamp" : "1656633600000",
  "fileFormat" : "CSV",
  "fileSchema" : "Bucket, Key, VersionId, IsLatest, IsDeleteMarker, Size, LastModifiedDate, ETag, StorageClass, IsMultipartUploaded, ReplicationStatus, EncryptionStatus, ObjectLockRetainUntilDate, ObjectLockMode, ObjectLockLegalHoldStatus, IntelligentTieringAccessTier, BucketKeyStatus, ChecksumAlgorithm",
  "files" : [ {
    "key" : "emailstore/eceeecom-5732-poc-old/emailstore-inventory-config/data/b84c7842-bc58-40b9-afb0-622060853c8a.csv.gz",
    "size" : 623,
    "MD5checksum" : "XYZ"
  } ]
}
```

When I unzip the above csv.gz file, I observe:

```
"eceeecom-5732-poc-old","emailstore/0079d564-dccc-4066-a42d-8d9113097d02","","true","false","150223","2022-06-30T21:22:06.000Z","60d3815bd0e85e3b689139b6938362b4","STANDARD","false","","SSE-S3","","","","","DISABLED",""
"eceeecom-5732-poc-old","emailstore/00975dec-f64b-4932-a1e8-9ec1284f76bb","","true","false","46054","2022-06-30T21:22:06.000Z","214c15f193c58defbbf938318e103aed","STANDARD","false","","SSE-S3","","","","","DISABLED",""
```

Clearly there is no **VersionId**, and that is the culprit, but how can I make the inventory configuration not ask for a VersionId in the manifest? When I was reading about inventory lists, the documentation said the Version ID field is not included when the list covers only the current versions of objects: https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-inventory.html
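One workaround, sketched here under the assumption that the source bucket is unversioned, is to bypass the inventory manifest and give the Batch Operations job a plain CSV manifest containing only Bucket and Key columns, which S3 treats as unversioned tasks. Deriving that manifest from the inventory rows is a few lines:

```python
import csv
import io

# First five inventory columns of the two rows shown above (truncated).
inventory_csv = (
    '"eceeecom-5732-poc-old","emailstore/0079d564-dccc-4066-a42d-8d9113097d02","","true","false"\n'
    '"eceeecom-5732-poc-old","emailstore/00975dec-f64b-4932-a1e8-9ec1284f76bb","","true","false"\n'
)

# Keep only Bucket and Key -- a two-column CSV manifest has no VersionId
# column for S3 Batch Operations to demand.
out = io.StringIO()
writer = csv.writer(out, quoting=csv.QUOTE_ALL, lineterminator="\n")
for row in csv.reader(io.StringIO(inventory_csv)):
    writer.writerow(row[:2])

manifest = out.getvalue()
print(manifest)
```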
1
answers
0
votes
24
views
asked 4 days ago

Get current instance features from within said instance

I've been working on some code that would benefit from some awareness of the platform on which it's running. On bare metal, several options are available (lshw, hwloc, and so on). In EC2 instances this task is not so straightforward, as they run on virtualization (excluding bare-metal instances, evidently): running `lshw`, for instance, lists hardware that does not necessarily correspond to the available resources. As an example, running lshw on a t2.micro instance, which has 1 core available by default, gives the actual model of the CPU on which it is running, an Intel Xeon with 12 cores. I understand that I can fetch [instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html), find which instance type the code is running on, and use the AWS CLI and/or EC2 API to get [the description of the instance](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/ec2-api.pdf). The issue with that workaround is that it presupposes that the current instance either has the AWS CLI configured with proper credentials or has user credentials available as environment variables, which may or may not be true. I've been looking for a more general solution that could work, at least, on the most popular Linux distros, such as querying the system for the actually available resources (CPU cores, threads, memory, cache, and accelerators), but have so far failed to find a suitable one. Is this possible? Or is such a query not a possibility in these circumstances?
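A sketch of the two credential-free angles mentioned above: the instance metadata service answers from inside any instance without any configured AWS credentials, and the kernel itself reports the resources the guest was actually given (unlike lshw's view of the host CPU). The `fetch` parameter is just an injection point so the function can be exercised off-EC2; the file paths assume Linux:

```python
import os
import urllib.request

def instance_type(fetch=None):
    # IMDS needs no AWS credentials -- it is reachable only from the
    # instance itself, at a link-local address.
    url = "http://169.254.169.254/latest/meta-data/instance-type"
    if fetch is None:
        fetch = lambda u: urllib.request.urlopen(u, timeout=2).read().decode()
    return fetch(url)

def available_cpus():
    # CPUs this process may actually run on (1 on a t2.micro), not the
    # host's physical core count that lshw reports.
    return len(os.sched_getaffinity(0))

def available_memory_kib():
    # Memory the guest was given, straight from the kernel.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])

print(available_cpus(), available_memory_kib())
```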
1
answers
0
votes
19
views
asked 5 days ago

eb platform create fails with Ruby SDK deprecated error

When trying to create a custom Elastic Beanstalk platform that uses Python 3.10.5, I keep running into this error:

```
[2022-07-01T05:50:06.466Z] INFO [5419] - [CMD-PackerBuild/PackerBuild/PackerBuildHook/build.rb] : Activity execution failed, because: Version 2 of the Ruby SDK will enter maintenance mode as of November 20, 2020. To continue receiving service updates and new features, please upgrade to Version 3. More information can be found here: https://aws.amazon.com/blogs/developer/deprecation-schedule-for-aws-sdk-for-ruby-v2/ 'packer build' failed, the build log has been saved to '/var/log/packer-builder/Python3.10_Ubuntu:1.0.8-builder.log' (ElasticBeanstalk::ExternalInvocationError) caused by: Version 2 of the Ruby SDK will enter maintenance mode as of November 20, 2020. To continue receiving service updates and new features, please upgrade to Version 3. More information can be found here: https://aws.amazon.com/blogs/developer/deprecation-schedule-for-aws-sdk-for-ruby-v2/ 'packer build' failed, the build log has been saved to '/var/log/packer-builder/Python3.10_Ubuntu:1.0.8-builder.log' (Executor::NonZeroExitStatus)
```

I'm not sure how to get around it, as none of my actual code uses Ruby at all. I have tried to SSH into the packer build box and run `gem install aws-sdk` to get the latest version, but the problem persists. I'm really unsure of what to do at this point. Any advice?
2
answers
0
votes
35
views
asked 5 days ago

Status check 2/2 failed on the Amazon side.

Hi team, one of our servers went down yesterday with a 2/2 status-check failure on the Amazon side, due to which we are also unable to log in. I have tried multiple troubleshooting steps, such as starting, stopping, rebooting, enabling detailed monitoring, and collecting system logs, but it appears that we are unable to recover the instance at this time. I have also tried to increase the server resources for the time being, but this did not solve the problem. Please help me recover from this issue, and please see the logs below for more details. (Instance type: m5.4xlarge, with 1000 GB of gp2.) [ 0.000000] Linux version 5.8.0-1038-aws (buildd@lcy01-amd64-016) (gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #40~20.04.1-Ubuntu SMP Thu Jun 17 13:25:28 UTC 2021 (Ubuntu 5.8.0-1038.40~20.04.1-aws 5.8.18) [ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.8.0-1038-aws root=PARTUUID=5198cbc0-01 ro console=tty1 console=ttyS0 nvme_core.io_timeout=4294967295 panic=-1 [ 0.000000] KERNEL supported cpus: [ 0.000000] Intel GenuineIntel [ 0.000000] AMD AuthenticAMD [ 0.000000] Hygon HygonGenuine [ 0.000000] Centaur CentaurHauls [ 0.000000] zhaoxin Shanghai [ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' [ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 [ 0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. 
[ 0.000000] BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffe8fff] usable [ 0.000000] BIOS-e820: [mem 0x00000000bffe9000-0x00000000bfffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000ff7ffffff] usable [ 0.000000] BIOS-e820: [mem 0x0000000ff8000000-0x000000103fffffff] reserved [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] SMBIOS 2.7 present. [ 0.000000] DMI: Amazon EC2 m5a.4xlarge/, BIOS 1.0 10/16/2017 [ 0.000000] Hypervisor detected: KVM [ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00 [ 0.000000] kvm-clock: cpu 0, msr 124a01001, primary cpu clock [ 0.000000] kvm-clock: using sched offset of 11809202197 cycles [ 0.000003] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns [ 0.000005] tsc: Detected 2199.474 MHz processor [ 0.000602] last_pfn = 0xff8000 max_arch_pfn = 0x400000000 [ 0.000709] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT [ 0.000736] last_pfn = 0xbffe9 max_arch_pfn = 0x400000000 [ 0.006651] check: Scanning 1 areas for low memory corruption [ 0.006703] Using GB pages for direct mapping [ 0.006927] RAMDISK: [mem 0x37715000-0x37b81fff] [ 0.006938] ACPI: Early table checksum verification disabled [ 0.006945] ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) [ 0.006952] ACPI: RSDT 0x00000000BFFEDCB0 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) [ 0.006958] ACPI: FACP 0x00000000BFFEFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) [ 0.006964] ACPI: DSDT 0x00000000BFFEDD00 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 
00000001) [ 0.006968] ACPI: FACS 0x00000000BFFEFF40 000040 [ 0.006971] ACPI: SSDT 0x00000000BFFEF170 000DC8 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) [ 0.006975] ACPI: APIC 0x00000000BFFEF010 0000E6 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) [ 0.006978] ACPI: SRAT 0x00000000BFFEEE90 000180 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) [ 0.006981] ACPI: SLIT 0x00000000BFFEEE20 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) [ 0.006985] ACPI: WAET 0x00000000BFFEEDF0 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) [ 0.006991] ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) [ 0.006994] ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) [ 0.006997] ACPI: Reserving FACP table memory at [mem 0xbffeff80-0xbffefff3] [ 0.006999] ACPI: Reserving DSDT table memory at [mem 0xbffedd00-0xbffeede8] [ 0.007000] ACPI: Reserving FACS table memory at [mem 0xbffeff40-0xbffeff7f] [ 0.007001] ACPI: Reserving SSDT table memory at [mem 0xbffef170-0xbffeff37] [ 0.007002] ACPI: Reserving APIC table memory at [mem 0xbffef010-0xbffef0f5] [ 0.007003] ACPI: Reserving SRAT table memory at [mem 0xbffeee90-0xbffef00f] [ 0.007004] ACPI: Reserving SLIT table memory at [mem 0xbffeee20-0xbffeee8b] [ 0.007005] ACPI: Reserving WAET table memory at [mem 0xbffeedf0-0xbffeee17] [ 0.007007] ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] [ 0.007008] ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] [ 0.007080] SRAT: PXM 0 -> APIC 0x00 -> Node 0 [ 0.007082] SRAT: PXM 0 -> APIC 0x01 -> Node 0 [ 0.007083] SRAT: PXM 0 -> APIC 0x02 -> Node 0 [ 0.007084] SRAT: PXM 0 -> APIC 0x03 -> Node 0 [ 0.007085] SRAT: PXM 0 -> APIC 0x04 -> Node 0 [ 0.007086] SRAT: PXM 0 -> APIC 0x05 -> Node 0 [ 0.007087] SRAT: PXM 0 -> APIC 0x06 -> Node 0 [ 0.007088] SRAT: PXM 0 -> APIC 0x07 -> Node 0 [ 0.007088] SRAT: PXM 0 -> APIC 0x08 -> Node 0 [ 0.007089] SRAT: PXM 0 -> APIC 0x09 -> Node 0 [ 0.007090] SRAT: PXM 0 -> APIC 0x0a -> Node 0 [ 
0.007091] SRAT: PXM 0 -> APIC 0x0b -> Node 0 [ 0.007092] SRAT: PXM 0 -> APIC 0x0c -> Node 0 [ 0.007093] SRAT: PXM 0 -> APIC 0x0d -> Node 0 [ 0.007094] SRAT: PXM 0 -> APIC 0x0e -> Node 0 [ 0.007095] SRAT: PXM 0 -> APIC 0x0f -> Node 0 [ 0.007098] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0xbfffffff] [ 0.007099] ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x103fffffff] [ 0.007112] NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0xff7ffffff] -> [mem 0x00000000-0xff7ffffff] [ 0.007121] NODE_DATA(0) allocated [mem 0xff7fd5000-0xff7ffefff] [ 0.007503] Zone ranges: [ 0.007504] DMA [mem 0x0000000000001000-0x0000000000ffffff] [ 0.007505] DMA32 [mem 0x0000000001000000-0x00000000ffffffff] [ 0.007507] Normal [mem 0x0000000100000000-0x0000000ff7ffffff] [ 0.007508] Device empty [ 0.007509] Movable zone start for each node [ 0.007513] Early memory node ranges [ 0.007514] node 0: [mem 0x0000000000001000-0x000000000009efff] [ 0.007515] node 0: [mem 0x0000000000100000-0x00000000bffe8fff] [ 0.007516] node 0: [mem 0x0000000100000000-0x0000000ff7ffffff] [ 0.007522] Initmem setup node 0 [mem 0x0000000000001000-0x0000000ff7ffffff] [ 0.007827] DMA zone: 28770 pages in unavailable ranges [ 0.013325] DMA32 zone: 23 pages in unavailable ranges [ 0.128485] ACPI: PM-Timer IO Port: 0xb008 [ 0.128498] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) [ 0.128538] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 [ 0.128541] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) [ 0.128543] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.128545] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) [ 0.128546] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) [ 0.128551] Using ACPI (MADT) for SMP configuration information [ 0.128553] ACPI: HPET id: 0x8086a201 base: 0xfed00000 [ 0.128562] smpboot: Allowing 16 CPUs, 0 hotplug CPUs [ 0.128591] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff] [ 0.128593] PM: 
hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff] [ 0.128594] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff] [ 0.128595] PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff] [ 0.128597] PM: hibernation: Registered nosave memory: [mem 0xbffe9000-0xbfffffff] [ 0.128598] PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xdfffffff] [ 0.128598] PM: hibernation: Registered nosave memory: [mem 0xe0000000-0xe03fffff] [ 0.128599] PM: hibernation: Registered nosave memory: [mem 0xe0400000-0xfffbffff] [ 0.128600] PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff] [ 0.128602] [mem 0xc0000000-0xdfffffff] available for PCI devices [ 0.128604] Booting paravirtualized kernel on KVM [ 0.128607] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns [ 0.128615] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 [ 0.129248] percpu: Embedded 56 pages/cpu s192512 r8192 d28672 u262144 [ 0.129287] setup async PF for cpu 0 [ 0.129294] kvm-stealtime: cpu 0, msr fb8c2e080 [ 0.129301] Built 1 zonelists, mobility grouping on. 
Total pages: 16224626 [ 0.129302] Policy zone: Normal [ 0.129304] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.8.0-1038-aws root=PARTUUID=5198cbc0-01 ro console=tty1 console=ttyS0 nvme_core.io_timeout=4294967295 panic=-1 [ 0.135405] Dentry cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) [ 0.138445] Inode-cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) [ 0.138515] mem auto-init: stack:off, heap alloc:on, heap free:off [ 0.267053] Memory: 64693096K/65928732K available (14339K kernel code, 2545K rwdata, 5476K rodata, 2648K init, 4904K bss, 1235636K reserved, 0K cma-reserved) [ 0.267061] random: get_random_u64 called from kmem_cache_open+0x2d/0x410 with crng_init=0 [ 0.267205] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 [ 0.267222] ftrace: allocating 46691 entries in 183 pages [ 0.284648] ftrace: allocated 183 pages with 6 groups [ 0.284772] rcu: Hierarchical RCU implementation. [ 0.284773] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=16. [ 0.284775] Trampoline variant of Tasks RCU enabled. [ 0.284775] Rude variant of Tasks RCU enabled. [ 0.284776] Tracing variant of Tasks RCU enabled. [ 0.284777] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies. [ 0.284778] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 [ 0.287928] NR_IRQS: 524544, nr_irqs: 552, preallocated irqs: 16 [ 0.288408] random: crng done (trusting CPU's manufacturer) [ 0.433686] Console: colour VGA+ 80x25 [ 0.949504] printk: console [tty1] enabled [ 1.196291] printk: console [ttyS0] enabled [ 1.200429] ACPI: Core revision 20200528 [ 1.204793] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns [ 1.213129] APIC: Switch to symmetric I/O mode setup [ 1.217629] Switched APIC routing to physical flat. 
[ 1.223344] ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 [ 1.228384] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1fb441f3908, max_idle_ns: 440795250092 ns [ 1.237533] Calibrating delay loop (skipped) preset value.. 4398.94 BogoMIPS (lpj=8797896) [ 1.241533] pid_max: default: 32768 minimum: 301 [ 1.245565] LSM: Security Framework initializing [ 1.249543] Yama: becoming mindful. [ 1.253557] AppArmor: AppArmor initialized [ 1.257659] Mount-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) [ 1.261614] Mountpoint-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) [ 1.266288] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 512 [ 1.269534] Last level dTLB entries: 4KB 1536, 2MB 1536, 4MB 768, 1GB 0 [ 1.273534] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization [ 1.277533] Spectre V2 : Mitigation: Full AMD retpoline [ 1.281532] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch [ 1.285533] Speculative Store Bypass: Vulnerable [ 1.289807] Freeing SMP alternatives memory: 40K [ 1.406501] smpboot: CPU0: AMD EPYC 7571 (family: 0x17, model: 0x1, stepping: 0x2) [ 1.409675] Performance Events: Fam17h+ core perfctr, AMD PMU driver. [ 1.413537] ... version: 0 [ 1.417532] ... bit width: 48 [ 1.421532] ... generic registers: 6 [ 1.425532] ... value mask: 0000ffffffffffff [ 1.429532] ... max period: 00007fffffffffff [ 1.433532] ... fixed-purpose events: 0 [ 1.437532] ... event mask: 000000000000003f [ 1.441596] rcu: Hierarchical SRCU implementation. [ 1.446253] smp: Bringing up secondary CPUs ... [ 1.449663] x86: Booting SMP configuration: [ 1.453539] .... 
node #0, CPUs: #1 [ 0.937207] kvm-clock: cpu 1, msr 124a01041, secondary cpu clock [ 1.455817] setup async PF for cpu 1 [ 1.457530] kvm-stealtime: cpu 1, msr fb8c6e080 [ 1.469534] #2 [ 0.937207] kvm-clock: cpu 2, msr 124a01081, secondary cpu clock [ 1.471039] setup async PF for cpu 2 [ 1.473530] kvm-stealtime: cpu 2, msr fb8cae080 [ 1.481657] #3 [ 0.937207] kvm-clock: cpu 3, msr 124a010c1, secondary cpu clock [ 1.485679] setup async PF for cpu 3 [ 1.489530] kvm-stealtime: cpu 3, msr fb8cee080 [ 1.497656] #4 [ 0.937207] kvm-clock: cpu 4, msr 124a01101, secondary cpu clock [ 1.499437] setup async PF for cpu 4 [ 1.501530] kvm-stealtime: cpu 4, msr fb8d2e080 [ 1.513649] #5 [ 0.937207] kvm-clock: cpu 5, msr 124a01141, secondary cpu clock [ 1.515060] setup async PF for cpu 5 [ 1.517530] kvm-stealtime: cpu 5, msr fb8d6e080 [ 1.525659] #6 [ 0.937207] kvm-clock: cpu 6, msr 124a01181, secondary cpu clock [ 1.529602] setup async PF for cpu 6 [ 1.533530] kvm-stealtime: cpu 6, msr fb8dae080 [ 1.541658] #7 [ 0.937207] kvm-clock: cpu 7, msr 124a011c1, secondary cpu clock [ 1.543028] setup async PF for cpu 7 [ 1.545530] kvm-stealtime: cpu 7, msr fb8dee080 [ 1.553662] #8 [ 0.937207] kvm-clock: cpu 8, msr 124a01201, secondary cpu clock [ 1.558560] setup async PF for cpu 8 [ 1.561530] kvm-stealtime: cpu 8, msr fb8e2e080 [ 1.569799] #9 [ 0.937207] kvm-clock: cpu 9, msr 124a01241, secondary cpu clock [ 1.573726] setup async PF for cpu 9 [ 1.577530] kvm-stealtime: cpu 9, msr fb8e6e080 [ 1.585658] #10 [ 0.937207] kvm-clock: cpu 10, msr 124a01281, secondary cpu clock [ 1.587067] setup async PF for cpu 10 [ 1.589530] kvm-stealtime: cpu 10, msr fb8eae080 [ 1.597671] #11 [ 0.937207] kvm-clock: cpu 11, msr 124a012c1, secondary cpu clock [ 1.602918] setup async PF for cpu 11 [ 1.605530] kvm-stealtime: cpu 11, msr fb8eee080 [ 1.613655] #12 [ 0.937207] kvm-clock: cpu 12, msr 124a01301, secondary cpu clock [ 1.617734] setup async PF fo
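Before anything else, it helps to pin down whether the 2/2 failure is the system status check (AWS-side hardware/network) or the instance status check (OS-side boot/health), since the remedies differ. A minimal AWS CLI sketch of that triage, assuming the CLI is configured; the instance ID below is a placeholder:

```shell
# Placeholder instance ID -- substitute your own.
INSTANCE_ID=i-0123456789abcdef0

# Show both status checks, even while the instance is stopped or impaired.
# "SystemStatus" failing points at the underlying AWS host; "InstanceStatus"
# failing points at the guest OS (kernel panic, full disk, bad fstab, etc.).
aws ec2 describe-instance-status \
    --instance-ids "$INSTANCE_ID" \
    --include-all-instances

# Capture the most recent console output, which usually shows where boot
# stopped -- far more of it than the truncated paste above.
aws ec2 get-console-output \
    --instance-id "$INSTANCE_ID" \
    --latest \
    --output text
```

A stop/start (not a reboot) moves the instance to new underlying hardware, which is the usual fix when only the system status check is failing; since this root volume is EBS-backed gp2, data survives the stop.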
0 answers · 0 votes · 38 views · asked 5 days ago

Amplify auto generated resources

Is there some way to tell which auto-generated resources (Cognito user pools, Lambdas, etc.) go with which Amplify project? Is there some config file, console page, or query I can issue that will show me the "inventory" of all the auto-generated components that belong to a specific Amplify project?

Some details: I have created a number of test Amplify projects and deleted some but not others. Of course there were occasional failures. I want to delete the stuff left over from failed project deletes, but how do I tell which pieces go with which Amplify project, so I can determine whether I still need each piece?

For instance, in addition to the Cognito user pool associated with my auth (listed on the console page, and it actually contains my users and groups, so I know it's the right one), there are a number named "amplify_backend_manager_<<some-id>>". I'm not sure what they are for, or whether I still need any or all of them.

Also, in addition to the one Lambda I explicitly created, there are a number with names indicating their purpose (such as "amplify-login-create-auth-challenge", "amplify-login-custom-message", "amplify-login-define-auth-challenge", "amplify-login-verify-auth-challenge") followed by some sort of ID. Each ID has all of the associated Lambdas, and there are at least 5 different sets... but I only have 1 Amplify project at this point, and I'm not sure which ones go with it.

I can't seem to find an answer in the docs. Any help would be appreciated. Thanks!
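One way to build that inventory, sketched with the AWS CLI (assuming it is configured for the right account and region; the stack name in the last command is a placeholder): Amplify provisions backend resources through CloudFormation stacks whose names start with `amplify-`, so listing those stacks and their resources maps each user pool or Lambda back to an app and environment.

```shell
# List the Amplify apps in this region with their app IDs.
aws amplify list-apps \
    --query 'apps[].{name:name,appId:appId}'

# List the CloudFormation stacks Amplify created. Stack names look like
# amplify-<appname>-<env>-<id>, so stacks left over from deleted apps
# stand out against the list-apps output above.
aws cloudformation list-stacks \
    --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE \
    --query "StackSummaries[?starts_with(StackName, 'amplify-')].StackName"

# Inventory every resource in one stack (placeholder stack name) --
# nested auth/function stacks appear here as AWS::CloudFormation::Stack.
aws cloudformation list-stack-resources \
    --stack-name amplify-myapp-dev-123456 \
    --query 'StackResourceSummaries[].{type:ResourceType,id:PhysicalResourceId}'
```

The `amplify_backend_manager_*` user pools are created by Amplify Studio, and the suffix typically matches an Amplify app ID, so comparing those suffixes against `list-apps` should show which ones are orphaned.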
1 answer · 1 vote · 11 views · asked 7 days ago