
Questions tagged with Developer Tools



Invalid security token error when executing nested step function on Step Functions Local

Are nested step functions supported on AWS Step Functions Local? I am trying to create two step functions, where the outer one executes the inner one. However, when executing the outer step function, I get the error "The security token included in the request is invalid".

To reproduce, use the latest `amazon/aws-stepfunctions-local:1.10.1` Docker image and launch the container with the following command:

```sh
docker run -p 8083:8083 \
  -e AWS_DEFAULT_REGION=us-east-1 \
  -e AWS_ACCESS_KEY_ID=TESTID \
  -e AWS_SECRET_ACCESS_KEY=TESTKEY \
  amazon/aws-stepfunctions-local
```

Then create a simple HelloWorld _inner_ step function in the Step Functions Local container:

```sh
aws stepfunctions --endpoint-url http://localhost:8083 create-state-machine \
  --definition "{\
    \"Comment\": \"A Hello World example of the Amazon States Language using a Pass state\",\
    \"StartAt\": \"HelloWorld\",\
    \"States\": {\
      \"HelloWorld\": {\
        \"Type\": \"Pass\",\
        \"End\": true\
      }\
    }}" \
  --name "HelloWorld" \
  --role-arn "arn:aws:iam::012345678901:role/DummyRole"
```

Then add a simple _outer_ step function that executes the HelloWorld one:

```sh
aws stepfunctions --endpoint-url http://localhost:8083 create-state-machine \
  --definition "{\
    \"Comment\": \"OuterTestComment\",\
    \"StartAt\": \"InnerInvoke\",\
    \"States\": {\
      \"InnerInvoke\": {\
        \"Type\": \"Task\",\
        \"Resource\": \"arn:aws:states:::states:startExecution\",\
        \"Parameters\": {\
          \"StateMachineArn\": \"arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld\"\
        },\
        \"End\": true\
      }\
    }}" \
  --name "HelloWorldOuter" \
  --role-arn "arn:aws:iam::012345678901:role/DummyRole"
```

Finally, start execution of the outer step function:

```sh
aws stepfunctions --endpoint-url http://localhost:8083 start-execution \
  --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorldOuter
```

The execution fails with _The security token included in the request is invalid_ in the logs:

```
arn:aws:states:us-east-1:123456789012:execution:HelloWorldOuter:b9627a1f-55ed-41a6-9702-43ffe1cacc2c : {"Type":"TaskSubmitFailed","PreviousEventId":4,"TaskSubmitFailedEventDetails":{"ResourceType":"states","Resource":"startExecution","Error":"StepFunctions.AWSStepFunctionsException","Cause":"The security token included in the request is invalid. (Service: AWSStepFunctions; Status Code: 400; Error Code: UnrecognizedClientException; Request ID: ad8a51c0-b8bf-42a0-a78d-a24fea0b7823; Proxy: null)"}}
```

Am I doing something wrong? Is any additional configuration necessary?
0
answers
0
votes
7
views
asked a day ago

Webdriver testcases are failing while setting connection

I am trying to deploy a basic webdriver.io + Node.js test on Device Farm, but the iOS test cases always get stuck and fail. Job ARN: arn:aws:devicefarm:us-west-2:612756076442:job:02bd6c95-640d-43b3-82eb-6f618777ac73/1a6364f3-7528-44b1-afa1-d6c2dc51d881/00000

```
2022-05-04T22:40:58.353Z ERROR @wdio/runner: Error: Failed to create session.
[0-0] Unable to connect to "http://localhost:4723/", make sure browser driver is running on that address.
[0-0] If you use services like chromedriver see initialiseServices logs above or in wdio.log file as the service might had problems to start the driver.
[0-0]     at startWebDriverSession (/private/tmp/scratchY4h2F6.scratch/test-packagex5aknf/node_modules/integration/node_modules/webdriver/build/utils.js:72:15)
[0-0]     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
[0-0]     at async Function.newSession (/private/tmp/scratchY4h2F6.scratch/test-packagex5aknf/node_modules/integration/node_modules/webdriver/build/index.js:46:45)
[0-0]     at async remote (/private/tmp/scratchY4h2F6.scratch/test-packagex5aknf/node_modules/integration/node_modules/webdriverio/build/index.js:77:22)
[0-0]     at async Runner._startSession (/private/tmp/scratchY4h2F6.scratch/test-packagex5aknf/node_modules/integration/node_modules/@wdio/runner/build/index.js:223:56)
```

Meanwhile, the Android test cases ask for a Node version greater than 12 (even after adding `nvm install 18.1.0` to the YAML).
0
answers
0
votes
1
views
asked 14 days ago

Emergency Floating Point Logic Error Repairs?

Dear Corretto, We have been encountering brick-wall problems trying to interact with Oracle, the JCP and the OpenJDK over Java floating point, since they are refusing to interact with us, discuss, or be persuaded. This leaves no options at all, in a necessary and obligate situation. From what we have gathered, both ourselves and more widely, IEEE 754 has a blind spot in it: an incompletion where either right or wrong can creep in. This is towards the right-hand side of float or double arithmetic and StrictMath method calls, where you can have a straddling value for the last decimal place. At that point there is presently room for confusion between the decimal and the binary; the decimal, which the human and further logic, mathematics or software needs, and the binary, which is converted to and from, that the computer needs to perform operations on. Since binary is for computers and decimal is for humans, dealing with both of them means that you should deal with them one at a time, or convert them entirely, one to the other. This is exactly what Java floating point does not do. It deliberately confuses the two, at the wrong time, with denormals and pronormals, at the last unit place in float and double decimal numbers, leading to what is accurately referred to as a 'floating point error', even though no Java exception objects are thrown at the time. When the Java switch statement was enhanced, so that programmers could immediately switch via a String, and could also coalesce switch options using the -> operator, there was no split in Java because of an incompatibility. The fact that people had cause to learn something different and new about switch was no problem either. The two compatibility options, one of which was a technical enhancement, only improved the circumstances for everyone: the programmers, users and vendors.
The BigInteger, BigDecimal and https://github.com/eobermuhlner big-math workarounds waste too much memory and too much speed, both. The superior approach for floating point types, aside from the problems of arbitrary precision on its own, is to just plain correct floating point errors by means of SSE hardware in the CPU floating point unit, the maths co-processor, which almost all PCs have these days, as of 2022; in fact they have successors past SSE, since SSE itself goes all the way up to version 4.2. The thing with a patch is that it is not the main stream, the primary product: people have the specific option to include one or not. Who could want or need a denormal or pronormal value, exactly? Any speed difference between accurate and inaccurate is negligible because of SSE and similar anyway. Why should Java developers everywhere have no broader choice in how they respond to an incomplete, therefore incorrect and flawed, standard and implementation? Present workarounds are only that, workarounds; even within float and double ranges, these workarounds are slower and larger in RAM than they need be. A patch can leave the writing, reading or exchange of float and double between Java code spaces exactly the same, or enhanced, providing complete choice between two Java floating point operation modes. Most importantly, what should happen about floating point error correction should Oracle, OpenJDK, the JCP et al., the ideal points for improvement on FP errors, never listen, and persist in leaving these errors and program operation problems intact? Surely it would be better for a downstream vendor to offer a patch for the problem, which is low, even really no, risk, apart from their mainstream OpenJDK and OpenJRE, than for this error problem to remain ongoing and neglected in place, with its present crazy consequences, forever.
If these are the needs and the circumstances, should Corretto not still consider a special floating point patch for Corretto Java, despite everything so far?
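For concreteness, the last-decimal-place behaviour being described is a property of the IEEE 754 binary64 format itself, not of any one runtime; a minimal sketch (in Python, which uses the same binary64 doubles as Java's `double`):

```python
from decimal import Decimal

# IEEE 754 binary floating point cannot represent 0.1 exactly, so the
# familiar last-place discrepancy appears in any binary64 language:
binary_sum = 0.1 + 0.2
print(binary_sum)            # 0.30000000000000004

# Exact base-10 arithmetic, of the kind BigDecimal-style workarounds
# provide, keeps the decimal result the human expects:
decimal_sum = Decimal("0.1") + Decimal("0.2")
print(decimal_sum)           # 0.3
```

The same pair of results falls out of Java's `double` and `BigDecimal` respectively, which is the gap the post is arguing about.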
0
answers
0
votes
2
views
asked a month ago

Java Floating Point Correction Patch in Corretto in a Compatible Manner.

Dear Corretto, This email is in regards to [https://github.com/corretto/corretto-18/issues/15](https://github.com/corretto/corretto-18/issues/15), which is closed at this time. I/we are not in a position to just leave this particular subject where it was last left, and we don't want the discussion or apprehension of this subject to remain closed. As we tried to describe at the beginning of our thread at [https://github.com/corretto/corretto-18/issues/15](https://github.com/corretto/corretto-18/issues/15), Java floating point denormal and pronormal values, from arithmetic and StrictMath function calls, in terms of range accuracy, don't correspond to decimal or binary mathematics. Because they are not one or the other, they don't correspond to anything. IEEE 754, for these reasons, is incomplete, and trying to justify Java's present state by IEEE 754 is a non sequitur, as is, more importantly, the present floating point state of OpenJDK, and Corretto, at least right now. To start with: how could correcting Java's default floating point behaviour possibly break compatibility? What on earth could correcting the present, strange, erroneous and inconsistent behaviour of Java floating point, as it is now, possibly break compatibility with? We can't really see or think of one example, certainly not one that is good for Java rather than a lower-level language. Forgive our naïveté, but are there really any such real, useful examples, pertinent to a Java space, needing floating point errors without their base 10 range correction? Even this is not the main thrust of our request. Our request to Corretto is still for the implementation and release of a Corretto Java patch to itself. A patch can be included or omitted, installed or not, still allowing compatibility.
But even with the inclusion of a patch, various switches or options for the runtime could be involved, to enable changed floating point arithmetic or StrictMath, yet still allowing compatibility, either totally, or in some desired partial or co-integral manner, which could still succeed in being backwards compatible, if one pauses to think. :) We can't exactly allow this discussion to be just halted, because we are in fact beginning to NEED corrected OpenJDK floating point arithmetic, and an equivalent corrected StrictMath, because of the 2D and 3D visual graphics work we are now planning. The present workarounds are too slow, and waste too much memory. We need continuous float and double range accuracy and the other facilities of those primitive types. As will a large number of programmers or companies out there who haven't come forward, or persisted. Can those involved at Corretto reconsider this matter, and implement and release a base 10 floating point arithmetic and StrictMath correction, or mode-varying, patch for Corretto Java, for present and future versions, for its JDK and JRE, on all offered Corretto platforms, now and into the future?
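On the accumulation concern for graphics workloads: the drift being described, and two mitigation styles, can be sketched with the same IEEE 754 binary64 doubles (shown in Python for brevity; correctly-rounded summation is one middle ground between raw doubles and full arbitrary precision):

```python
import math
from decimal import Decimal

# Accumulated rounding error in plain binary64 doubles:
naive = sum([0.1] * 10)
print(naive == 1.0)                    # False -- drift after only 10 steps

# Correctly-rounded summation removes the drift of repeated addition
# without arbitrary-precision memory overhead:
print(math.fsum([0.1] * 10) == 1.0)    # True

# Exact base-10 arithmetic via a decimal type (the BigDecimal analogue):
print(sum([Decimal("0.1")] * 10) == Decimal("1"))  # True
```

Which trade-off is acceptable (speed vs. exactness) is exactly the choice the post is asking vendors to expose.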
1
answers
0
votes
4
views
asked 2 months ago

AWS IoT Embedded SDK

Hello, To make my device development more straightforward, I'd like to use the [AWS IoT Device SDK Embedded C release 202108.00](https://github.com/aws/aws-iot-device-sdk-embedded-C/tree/202108.00#20210800). However, I am having trouble cross-compiling it for my platform (based on the BG77, using a Qualcomm version of Clang), and I am unable to configure and build the project. Here is my configure command:

```
cmake -G Ninja -B build -S . -DCMAKE_TOOLCHAIN_FILE=path/to/bg77.cmake -DBUILD_DEMOS=OFF -DBUILD_TESTS=OFF -DINSTALL_PLATFORM_ABSTRACTIONS=OFF
```

And the error I am seeing is:

```
<trim>
Downloading the Amazon Root CA certificate...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1188  100  1188    0     0  20941      0 --:--:-- --:--:-- --:--:-- 21214
Downloading the Baltimore Cybertrust Root CA certificate...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1262  100  1262    0     0  13008      0 --:--:-- --:--:-- --:--:-- 13010
-- Configuring done
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
OPENSSL_CRYPTO_LIBRARY (ADVANCED)
    linked by target "openssl_posix" in directory aws-iot-device-sdk-embedded-C/platform/posix/transport
OPENSSL_SSL_LIBRARY (ADVANCED)
    linked by target "openssl_posix" in directory aws-iot-device-sdk-embedded-C/platform/posix/transport
-- Generating done
CMake Generate step failed. Build files cannot be regenerated correctly.
```

At this point, there is no `build.ninja` file generated, so I cannot build the project. The README in the repo says the following:

```
The following table shows libraries that need to be installed in your system to run certain demos. If a dependency is not installed and cannot be built from source, demos that require that dependency will be excluded from the default all target.
```

What is the proper way to build this without these dependencies? Thank you, Jonathan
0
answers
0
votes
6
views
asked 2 months ago

EC2 instance can’t access the internet

Apparently, my EC2 instance can't access the internet properly. Here is what happens when I try to install a Python module:

```
[ec2-user@ip-172-31-90-31 ~]$ pip3 install flask
Defaulting to user installation because normal site-packages is not writeable
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fab198cbe10>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/flask/
```

etc. Also, inbound ping requests to the instance's Elastic IP fail (Request Timed Out). However, the website hosted on the same EC2 instance can be accessed over both HTTP and HTTPS.

The security group is configured as follows. The inbound rules are:

| Port range | Protocol | Source |
| ---------- | -------- | --------- |
| 80 | TCP | 0.0.0.0/0 |
| 22 | TCP | 0.0.0.0/0 |
| 80 | TCP | ::/0 |
| 22 | TCP | ::/0 |
| 443 | TCP | 0.0.0.0/0 |
| 443 | TCP | ::/0 |

The outbound rules are:

| IP Version | Type | Protocol | Port range | Destination |
| ---------- | ----------- | -------- | ---------- | --------- |
| IPv4 | All traffic | All | All | 0.0.0.0/0 |

The ACL inbound rules are:

| Type | Protocol | Port range | Source | Allow/Deny |
| ---- | -------- | ---------- | ------ | ---------- |
| HTTP (80) | TCP (6) | 80 | 0.0.0.0/0 | Allow |
| SSH (22) | TCP (6) | 22 | 0.0.0.0/0 | Allow |
| HTTPS (443) | TCP (6) | 443 | 0.0.0.0/0 | Allow |
| All ICMP - IPv4 | ICMP (1) | All | 0.0.0.0/0 | Allow |
| All traffic | All | All | 0.0.0.0/0 | Deny |

and the outbound rules are:

| Type | Protocol | Port range | Destination | Allow/Deny |
| ---- | -------- | ---------- | ----------- | ---------- |
| Custom TCP | TCP (6) | 1024 - 65535 | 0.0.0.0/0 | Allow |
| HTTP (80) | TCP (6) | 80 | 0.0.0.0/0 | Allow |
| SSH (22) | TCP (6) | 22 | 0.0.0.0/0 | Allow |
| HTTPS (443) | TCP (6) | 443 | 0.0.0.0/0 | Allow |
| All ICMP - IPv4 | ICMP (1) | All | 0.0.0.0/0 | Allow |
| All traffic | All | All | 0.0.0.0/0 | Deny |

This is what the route table associated with the subnet looks like (no explicit or edge associations):

| Destination | Target | Status | Propagated |
| ------------- | --------------------- | ------ | ---------- |
| 172.31.0.0/16 | local | Active | No |
| 0.0.0.0/0 | igw-09b554e4da387238c | Active | No |

As for the firewall, executing `sudo iptables -L` results in:

```
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
```

and `sudo iptables -L -t nat` gives:

```
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
```

What am I missing here? Any suggestions or ideas would be greatly appreciated. Thanks
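One thing worth checking, offered as a hedged diagnosis rather than a confirmed answer: network ACLs are stateless, so the *reply* packets of an outbound HTTPS request come back as inbound traffic on an ephemeral port (1024-65535), and the inbound rules above have no Allow covering those ports, so the catch-all Deny would drop pip's responses. A toy first-match evaluator illustrating the idea (the rule model and names here are hypothetical, not an AWS API):

```python
# Simplified model of the inbound NACL rules quoted above, evaluated
# first-match-wins the way network ACLs process rules in order:
INBOUND_RULES = [
    ("tcp", 80, "ALLOW"),
    ("tcp", 22, "ALLOW"),
    ("tcp", 443, "ALLOW"),
    ("icmp", None, "ALLOW"),   # port is irrelevant for ICMP
    ("all", None, "DENY"),     # final catch-all Deny
]

def evaluate(protocol, dst_port):
    """Return the action of the first rule matching the packet."""
    for proto, port, action in INBOUND_RULES:
        if proto in (protocol, "all") and port in (dst_port, None):
            return action
    return "DENY"

# pip's outbound connection to pypi on 443 is fine, but the response
# arrives inbound on an ephemeral destination port, e.g. 50432:
print(evaluate("tcp", 50432))   # DENY -> "Network is unreachable"
print(evaluate("tcp", 443))     # ALLOW -- why inbound web traffic works
```

If this is the cause, adding an inbound Allow for TCP 1024-65535 (mirroring the outbound Custom TCP rule) would fix it; security groups do not show the problem because they are stateful.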
2
answers
0
votes
16
views
asked 2 months ago

Setting MKL_NUM_THREADS to be more than 16 for m5 instances

Hey, I have a 32-core EC2 Linux m5 instance, with Python installed via Anaconda. I notice that my numpy cannot use more than 16 cores. It looks like my numpy uses libmkl_rt.so:

```
In [2]: np.show_config()
blas_mkl_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/home/ec2-user/anaconda3/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/home/ec2-user/anaconda3/include']
blas_opt_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/home/ec2-user/anaconda3/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/home/ec2-user/anaconda3/include']
lapack_mkl_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/home/ec2-user/anaconda3/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/home/ec2-user/anaconda3/include']
lapack_opt_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/home/ec2-user/anaconda3/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/home/ec2-user/anaconda3/include']
```

When I set MKL_NUM_THREADS below 16, it works:

```
(base) ec2-user@ip-172-31-18-3:~$ export MKL_NUM_THREADS=12 && python -c "import ctypes; mkl_rt = ctypes.CDLL('libmkl_rt.so'); print(mkl_rt.mkl_get_max_threads())"
12
```

When I set it to 24, it stops at 16:

```
(base) ec2-user@ip-172-31-18-3:~$ export MKL_NUM_THREADS=24 && python -c "import ctypes; mkl_rt = ctypes.CDLL('libmkl_rt.so'); print(mkl_rt.mkl_get_max_threads())"
16
```

But I do have 32 cores:

```
In [2]: os.cpu_count()
Out[2]: 32
```

Are there any other settings I need to check? Thanks, Bill
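One possible explanation, offered as an assumption rather than a confirmed diagnosis: EC2 vCPUs are hyperthreads, and MKL by default limits itself to one thread per *physical* core, while `os.cpu_count()` reports logical CPUs. On a 32-vCPU m5 instance that arithmetic lands exactly on the observed cap:

```python
# Back-of-envelope check (assumes the EC2 default of 2 hardware
# threads per physical core on m5 instances):
logical_cpus = 32        # what os.cpu_count() reports on this instance
threads_per_core = 2
physical_cores = logical_cpus // threads_per_core
print(physical_cores)    # 16 -- the ceiling mkl_get_max_threads() hits
```

If that is the cause, `lscpu`'s "Thread(s) per core" value on the instance would confirm it, and the 16-thread cap would be MKL behaving as designed rather than a misconfiguration.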
3
answers
0
votes
4
views
asked 3 months ago

CDK on local environment issue

Whenever I attempt to run a CDK command on my local machine, I receive the following error. For context: I am running CDK v2 on a Windows device with Python 3.7.9, and `cdk --version` returns 2.19.0. I have attempted uninstalling and reinstalling CDK multiple times, and the same CDK repository works for my two other teammates. I would appreciate anyone's help.

```
Traceback (most recent call last):
  File "app.py", line 4, in <module>
    import aws_cdk as cdk
  File "C:\Users\dalade\AppData\Local\Programs\Python\Python37\lib\site-packages\aws_cdk\__init__.py", line 24257, in <module>
    from . import aws_apigateway
  File "C:\Users\dalade\AppData\Local\Programs\Python\Python37\lib\site-packages\aws_cdk\aws_apigateway\__init__.py", line 1549, in <module>
    from ..aws_certificatemanager import ICertificate as _ICertificate_c194c70b
  File "C:\Users\dalade\AppData\Local\Programs\Python\Python37\lib\site-packages\aws_cdk\aws_certificatemanager\__init__.py", line 184, in <module>
    from ..aws_cloudwatch import (
  File "C:\Users\dalade\AppData\Local\Programs\Python\Python37\lib\site-packages\aws_cdk\aws_cloudwatch\__init__.py", line 500, in <module>
    from ..aws_iam import Grant as _Grant_a7ae64f8, IGrantable as _IGrantable_71c4f5de
  File "C:\Users\dalade\AppData\Local\Programs\Python\Python37\lib\site-packages\aws_cdk\aws_iam\__init__.py", line 654, in <module>
    "policy_dependable": "policyDependable",
  File "C:\Users\dalade\AppData\Local\Programs\Python\Python37\lib\site-packages\aws_cdk\aws_iam\__init__.py", line 662, in AddToPrincipalPolicyResult
    policy_dependable: typing.Optional[constructs.IDependable] = None,
AttributeError: module 'constructs' has no attribute 'IDependable'
Subprocess exited with error 1
```
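A hypothesis worth ruling out (my assumption, not confirmed in the post): CDK v2's `aws-cdk-lib` requires the `constructs` package at version 10.x, and `constructs.IDependable` does not exist in the older 3.x line that CDK v1 used, so a stale 3.x install in this Python environment would produce exactly this AttributeError. A small sketch for comparing installed versions against the teammates' environments:

```python
from importlib import metadata

def installed_version(pkg: str) -> str:
    """Report the installed version of a distribution, or 'not installed'."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return "not installed"

# Compare these two version numbers with a machine where CDK works:
for pkg in ("aws-cdk-lib", "constructs"):
    print(pkg, installed_version(pkg))
```

If `constructs` reports a 3.x version, upgrading it in the same environment that runs `app.py` would be the thing to try.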
1
answers
0
votes
25
views
asked 3 months ago

credentials working with CLI but not with Java SDK

I'm having trouble getting a set of credentials to work with the Java SDK, when they work with the CLI.

Background: I had some code working on an AWS Elastic Beanstalk instance, where I was setting the environment variables "aws.accessKeyId" and "aws.secretKey", and using the SystemPropertiesCredentialsProvider to build clients for accessing SQS, S3, etc. Following a security review by an internal team, I'm attempting to update this to use a different method of finding credentials, namely storing the credentials in an external file instead of an environment variable. To that end, here's what I've done:

1. I'm using an IAM user on this account, which belongs to a group that has the AmazonSQSFullAccess policy (among others) attached. This is unchanged from the working version of my app, where I had the same user but just a different credentials provider.
2. I have regenerated the security credentials for this user and verified they are active.
3. To test, I have the following set up locally in my ~/.aws/credentials file:
```
[sdk_temp_test]
aws_access_key_id = <redacted>
aws_secret_access_key = <redacted>
```
and ~/.aws/config file:
```
[profile sdk_temp_test]
region = us-east-1
```
4. At a shell prompt, if I then do `export AWS_PROFILE=sdk_temp_test`, I can run the following commands, which show that the credentials work and can access basic SQS functionality. I'm not including the output here, but the returned data shows that I am calling CLI functions as the user I expect, and I am retrieving the queues I expect to see in the us-east-1 region for this account.
```
aws sts get-caller-identity
aws sqs list-queues
```
So far, so good. However, I then attempt to do something like the following:

5. Create a file called "localtest.properties" that contains the following and is accessible on the classpath of my Java application:
```
accessKey="<redacted>"
secretKey="<redacted>"
```
6. Run code like so (this is a standalone example that illustrates the problem):
```
AWSCredentialsProvider provider = new ClasspathPropertiesFileCredentialsProvider("localtest.properties");
AWSCredentials credentials = provider.getCredentials();
String accessKeyId = credentials.getAWSAccessKeyId();
String secret = credentials.getAWSSecretKey();
System.out.println("accesskey is '" + accessKeyId + "'; secret is '" + secret + "'");

AmazonSQSClient client = (AmazonSQSClient)AmazonSQSClientBuilder.standard()
    .withRegion(Regions.US_EAST_1)
    .withCredentials(provider)
    .build();

System.out.println("LIST QUEUES TEST");
ListQueuesResult lqr = client.listQueues();
```
The debug line correctly prints out the credentials I expect, but the listQueues call throws the following exception:
```
Exception in thread "main" com.amazonaws.services.sqs.model.AmazonSQSException: The security token included in the request is invalid. (Service: AmazonSQS; Status Code: 403; Error Code: InvalidClientTokenId; Request ID: <redacted>; Proxy: null)
```
So I'm a little stuck. The credentials are good, because they work in my CLI test. My code I think is OK; I am just switching my credentials provider. And the new provider appears to be finding the correct credentials, based on the debug output. But put it all together, and it is not working for me when trying an SDK call, as I get that exception.

How do I troubleshoot this? Is it possible to get more details beyond "InvalidClientTokenId", i.e. what specifically is wrong? Can I look up the request ID somewhere to troubleshoot? Does the ClasspathPropertiesFileCredentialsProvider need something that the SystemPropertiesCredentialsProvider I used before did not? I opened a ticket with AWS support and they said SDK issues were a little out of scope; they pointed me towards articles on the credentials chain, and some sample code for the NodeJS SDK, which is structured a little differently.

Re: the credentials chain, I think with a custom provider I should bypass that? Just in case, I've ensured there are no environment variables like AWS_ACCESS_KEY_ID and no Java properties like aws.accessKeyId; I've even temporarily deleted my ~/.aws/credentials and config files while running the above code, to make sure no other credential is "sneaking in", but I still get the same exception.

I do get some warnings while running the above Java code:
```
Feb 21, 2022 12:00:45 PM com.amazonaws.auth.profile.internal.BasicProfileConfigLoader loadProfiles
WARNING: Your profile name includes a 'profile ' prefix. This is considered part of the profile name in the Java SDK, so you will need to include this prefix in your profile name when you reference this profile from your Java code.
(this repeats a number of times)
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.amazonaws.util.XpathUtils (file:/Users/tfeiler/.m2/repository/com/amazonaws/aws-java-sdk-core/1.11.964/aws-java-sdk-core-1.11.964.jar) to method com.sun.org.apache.xpath.internal.XPathContext.getDTMManager()
WARNING: Please consider reporting this to the maintainers of com.amazonaws.util.XpathUtils
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
(this occurs right before the exception is thrown)
```
They are just warnings, and I think the profile prefix warning relates to the entry in my ~/.aws/config file, so I don't think this is related to my problem, but I'm including it just in case. Does anyone have advice on things to try or how to troubleshoot this?
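One detail that may matter (a guess, not verified against the SDK source): properties-file parsers generally do not strip surrounding quotes, so `accessKey="<redacted>"` in step 5 would yield a key value that literally begins and ends with a `"` character. The quotes would also appear inside the single-quoted debug output, where they are easy to read as part of the formatting. A quick simulation of that parsing behaviour (in Python; the key value is a hypothetical placeholder):

```python
# A .properties line written with quotes, as in step 5 above:
line = 'accessKey="AKIAEXAMPLE"'

# Properties-style parsing splits on the first '=' and keeps the rest
# of the line verbatim -- the quotes become part of the value:
key, _, value = line.partition("=")
print(repr(value))           # '"AKIAEXAMPLE"'  (quotes included)

# The value the service actually needs has no quotes:
print(value.strip('"'))      # AKIAEXAMPLE
```

If that is what's happening, removing the quotes from localtest.properties would make the SDK send the same key the CLI does.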
1
answers
0
votes
56
views
asked 3 months ago

Can't change Amazon Recovery Controller controls states with .net SDK

I am trying to run an application that can change control states using the .NET SDK. I am getting the cluster endpoints from the DescribeClusterAsync method on the AmazonRoute53RecoveryControlConfigClient class with no issues. Then I attempt to run this code, where I set up the AmazonRoute53RecoveryClusterClient:

```
AmazonRoute53RecoveryClusterConfig clusterRecoveryConfig = new AmazonRoute53RecoveryClusterConfig();
clusterRecoveryConfig.RegionEndpoint = RegionEndpoint.GetBySystemName(clusterEndpoint.Region);
AmazonRoute53RecoveryClusterClient client = new AmazonRoute53RecoveryClusterClient(_awsCredentials, clusterRecoveryConfig);
```

When I attempt to execute GetRoutingControlStateAsync on the client:

```
await GetRoutingControlStateAsync(request);
```

I get an error stating: **The requested name is valid, but no data of the requested type was found.**

I have tried removing the Region and passing a cluster endpoint to ServiceURL, but then I get this error: **The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.**

I think I need to pass the endpoint URL to the client, but I am unsure how to do it. I have a Stack Overflow question on this same topic here: https://stackoverflow.com/questions/71042116/aws-route53-recovery-controller-error-when-getting-or-updating-the-control-state

I see in the Java example that you can set both the region and the data plane URL endpoint, but I don't see the equivalent in .NET: https://docs.aws.amazon.com/r53recovery/latest/dg/example_route53-recovery-cluster_UpdateRoutingControlState_section.html

This works when I use the CLI, where I can also set the region and URL endpoint: https://docs.aws.amazon.com/r53recovery/latest/dg/getting-started-cli-routing.control-state.html

What am I doing wrong here? Any guidance would be greatly appreciated.
3
answers
0
votes
6
views
asked 3 months ago

Why can't I install python logging library on Linux2 instance

Just started a new instance to run my python3 script. I need several libraries, which I can install with pip3 (`pip3 install requests` runs well), but I can't get the logging library installed. I have this output:

```
$ pip3 install logging
Defaulting to user installation because normal site-packages is not writeable
Collecting logging
  Using cached logging-0.4.9.6.tar.gz (96 kB)
    ERROR: Command errored out with exit status 1:
     command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-_n141mbi/logging/setup.py'"'"'; __file__='"'"'/tmp/pip-install-_n141mbi/logging/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-pxlnpi1y
         cwd: /tmp/pip-install-_n141mbi/logging/
    Complete output (48 lines):
    running egg_info
    creating /tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info
    writing /tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info/PKG-INFO
    writing dependency_links to /tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info/dependency_links.txt
    writing top-level names to /tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info/top_level.txt
    writing manifest file '/tmp/pip-pip-egg-info-pxlnpi1y/logging.egg-info/SOURCES.txt'
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-_n141mbi/logging/setup.py", line 13, in <module>
        packages = ["logging"],
      File "/usr/lib64/python3.7/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/usr/lib64/python3.7/distutils/dist.py", line 966, in run_commands
        self.run_command(cmd)
      File "/usr/lib64/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/usr/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 297, in run
        self.find_sources()
      File "/usr/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 304, in find_sources
        mm.run()
      File "/usr/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 535, in run
        self.add_defaults()
      File "/usr/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 571, in add_defaults
        sdist.add_defaults(self)
      File "/usr/lib64/python3.7/distutils/command/sdist.py", line 226, in add_defaults
        self._add_defaults_python()
      File "/usr/lib/python3.7/site-packages/setuptools/command/sdist.py", line 135, in _add_defaults_python
        build_py = self.get_finalized_command('build_py')
      File "/usr/lib64/python3.7/distutils/cmd.py", line 298, in get_finalized_command
        cmd_obj = self.distribution.get_command_obj(command, create)
      File "/usr/lib64/python3.7/distutils/dist.py", line 857, in get_command_obj
        klass = self.get_command_class(command)
      File "/usr/lib/python3.7/site-packages/setuptools/dist.py", line 768, in get_command_class
        self.cmdclass[command] = cmdclass = ep.load()
      File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2461, in load
        return self.resolve()
      File "/usr/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2467, in resolve
        module = __import__(self.module_name, fromlist=['__name__'], level=0)
      File "/usr/lib/python3.7/site-packages/setuptools/command/build_py.py", line 16, in <module>
        from setuptools.lib2to3_ex import Mixin2to3
      File "/usr/lib/python3.7/site-packages/setuptools/lib2to3_ex.py", line 13, in <module>
        from lib2to3.refactor import RefactoringTool, get_fixers_from_package
      File "/usr/lib64/python3.7/lib2to3/refactor.py", line 19, in <module>
        import logging
      File "/tmp/pip-install-_n141mbi/logging/logging/__init__.py", line 618
        raise NotImplementedError, 'emit must be implemented '\
                                 ^
    SyntaxError: invalid syntax
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```

I can't understand why this happens. Does anybody have an idea how to install logging? Thanks
1 answer · 0 votes · 10 views · asked 3 months ago

CDK and Route 53 Failover

We found a quite useful CDK article, "AWS CDK: Use Lambda with Application Load Balancer" (https://sbstjn.com/blog/aws-cdk-lambda-loadbalancer-vpc-certificate/), which uses a Lambda, a Route 53 A record, and more. However, the article does not cover failover. The straightforward question is: what changes to this article would be needed so that active-passive failover between regions is supported? I understand a Route 53 A record has the "Failover" routing policy, with which one can set up an active-passive failover configuration. Hypothetically: if us-east-1 is down, it would automatically switch us to us-east-2. Items of note:

- Unless I missed it, the latest CDK ARecord (2.10.0) does not seem to support configuring the ARecord for failover. https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_route53.ARecord.html
- The ability to set the routing policy for an ARecord was requested in 2019 (https://github.com/aws/aws-cdk/issues/4391), which would cover a superset of what we need. The comments mention using CfnRecordSet. Is that currently the best way?
- Top-level concepts from the article are: Lambda with handler code, LambdaTarget, ApplicationLoadBalancer, Certificate, and Route 53 A record (IPv4 DNS).

Other related resources:

- https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
- https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring.html
- https://aws.amazon.com/blogs/networking-and-content-delivery/lambda-functions-as-targets-for-application-load-balancers/

Any insight into how to implement failover using CDK would be quite welcome. Thanks!

# Update Feb 5, 2022

Still hoping for an optimal solution. For now, trying to wrestle with **CfnRecordSet**. The CfnRecordSet properties `setIdentifier`, **aliasTarget** with **evaluateTargetHealth** (Evaluate Target Health), and **failover** seem to be key.
**Evaluate Target Health related docs that we are looking at:** * [Route 53 RecordSet "Evaluate Target Health" via CloudFormation template](https://forums.aws.amazon.com/thread.jspa?threadID=133969) * which points at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-route53-recordset.html#cfn-route53-recordset-aliastarget * and points at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-route53-aliastarget.html#cfn-route53-aliastarget-evaluatetargethealth
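For what it's worth, a minimal sketch of the `CfnRecordSet` escape hatch mentioned in the issue comments, with `failover`, `setIdentifier`, and `aliasTarget.evaluateTargetHealth` wired up. All identifiers and values below are placeholders, and this is an untested sketch, not a confirmed solution:

```typescript
import { aws_route53 as route53 } from 'aws-cdk-lib';
import { Construct } from 'constructs';

// Placeholders: in a real stack these come from the hosted zone, the ALB,
// and a Route 53 health check created elsewhere.
declare const scope: Construct;
declare const hostedZoneId: string;
declare const albDnsName: string;
declare const albCanonicalHostedZoneId: string;
declare const healthCheckId: string;

// PRIMARY record for the us-east-1 ALB; a matching record with
// failover: 'SECONDARY' (and its own setIdentifier) would point at us-east-2.
new route53.CfnRecordSet(scope, 'PrimaryARecord', {
  hostedZoneId,
  name: 'app.example.com',
  type: 'A',
  failover: 'PRIMARY',
  setIdentifier: 'primary-us-east-1',
  healthCheckId,
  aliasTarget: {
    dnsName: albDnsName,
    hostedZoneId: albCanonicalHostedZoneId, // the ALB's canonical zone, not yours
    evaluateTargetHealth: true,
  },
});
```

The properties map one-to-one onto the `AWS::Route53::RecordSet` CloudFormation resource, so the CloudFormation docs linked above describe the allowed values.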
0 answers · 0 votes · 5 views · asked 4 months ago

How to align the Name Servers of a Route 53 Public Hosted Zone and a Registered Domain in CDK?

My account contains a registered domain with 4 name servers assigned to it. The originally created Public Hosted Zone was deleted, as I want it to be created by the CDK deployment. When deploying the Route 53 Public Hosted Zone, it creates a new set of name servers each time. The solution is then to update the name servers of the registered domain manually. This issue, together with a manual fix, is described in the following link: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-replace-hosted-zone.html

**Problem:** I don't want to change the name servers manually, but directly with CDK.
* I didn't see any option in the constructor to set the name servers when deploying: https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_route53.PublicHostedZone.html
* Nor did I see how to change the name servers of the registered domain automatically via CDK.
* When creating a ZoneDelegationRecord (see code in the following), I could add the target name servers, but it fails during deployment.

**Question:** How do I create a Public Hosted Zone and link it to a registered domain, with both sharing the same name servers, via CDK?
**Code:**
```
import * as route53 from 'aws-cdk-lib/aws-route53';
import { RemovalPolicy, Tags, Duration } from 'aws-cdk-lib';

const route53PublicHostedZone = new route53.PublicHostedZone(scope, 'route53PublicHostedZone', {
  zoneName: 'myWhatEverDomain.com',
});

const zoneDelegationRecord = new route53.ZoneDelegationRecord(scope, 'MyZoneDelegationRecord', {
  // real NS names are different of course
  nameServers: [
    'ns-1.awsdns-1.org',
    'ns-2.awsdns-2.net',
    'ns-3.awsdns-3.co.uk',
    'ns-4.awsdns-4.com'],
  zone: route53PublicHostedZone,
  ttl: Duration.minutes(1),
});
```
**Error:**
```
Failed resources:
myCdkStack | 16:53:12 | CREATE_FAILED | AWS::Route53::RecordSet | MyZoneDelegationRecord (MyZoneDelegationRecordD1ECAA29) [Tried to create resource record set [name='myWhatEverDomain.com.', type='NS'] but it already exists]
myCdkStack failed: Error: The stack named myCdkStack failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE
    at Object.waitForStackDeploy (C:\Users\myUser\AppData\Roaming\npm\node_modules\aws-cdk\lib\api\util\cloudformation.ts:307:11)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at prepareAndExecuteChangeSet (C:\Users\myUser\AppData\Roaming\npm\node_modules\aws-cdk\lib\api\deploy-stack.ts:355:26)
    at CdkToolkit.deploy (C:\Users\myUser\AppData\Roaming\npm\node_modules\aws-cdk\lib\cdk-toolkit.ts:201:24)
    at initCommandLine (C:\Users\myUser\AppData\Roaming\npm\node_modules\aws-cdk\bin\cdk.ts:281:9)
The stack named pmyCdkStack failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE
```
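One avenue that might be worth exploring (an untested sketch; all construct names here are placeholders): instead of a delegation record, let CDK create the zone, then copy its freshly generated name servers onto the registered domain with an `AwsCustomResource` that calls the Route 53 Domains `UpdateDomainNameservers` API, which is served from us-east-1:

```typescript
import { Fn, aws_route53 as route53, custom_resources as cr } from 'aws-cdk-lib';
import { Construct } from 'constructs';

// `scope` stands in for the surrounding stack or construct.
declare const scope: Construct;

const zone = new route53.PublicHostedZone(scope, 'Zone', {
  zoneName: 'myWhatEverDomain.com',
});

// Route 53 assigns 4 name servers per public hosted zone. They are only known
// at deploy time (a token list), so select each entry with Fn.select and push
// them to the registered domain, keeping domain and zone aligned automatically.
const nameServers = zone.hostedZoneNameServers!;
new cr.AwsCustomResource(scope, 'AlignNameServers', {
  onUpdate: {
    service: 'Route53Domains',
    action: 'updateDomainNameservers',
    region: 'us-east-1', // Route 53 Domains only has a us-east-1 endpoint
    parameters: {
      DomainName: 'myWhatEverDomain.com',
      Nameservers: [0, 1, 2, 3].map((i) => ({ Name: Fn.select(i, nameServers) })),
    },
    physicalResourceId: cr.PhysicalResourceId.of('align-name-servers'),
  },
  policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
    resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});
```

This avoids the "NS record already exists" error entirely, since the zone's own NS record set is left untouched.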
1 answer · 0 votes · 20 views · asked 4 months ago

aws-sdk V3 timeout in lambda

Hello, I'm using a Node.js 14.x Lambda to control an ECS service. As I do not need the ECS task to run permanently, I created a service inside the cluster so I can adjust the desired count to start or stop it at will. I also created two Lambdas: one for querying the current desired count and the current public IP, another for updating said desired count (to 0 or 1, should I want to start or stop it). I have packed aws-sdk v3 in a Lambda layer so I don't have to package it with each Lambda. That seems to work fine, as I was getting the runtime error

> "Runtime.ImportModuleError: Error: Cannot find module '@aws-sdk/client-ecs'"

but I do not anymore. The code also works fine from my workstation, as I'm able to execute it locally and I get the desired result (the query to the ECS API works fine). But all I get when testing from the Lambdas are timeouts. It usually executes in less than 3 seconds on my local workstation, but even with the Lambda timeout set to 3 minutes, this is what I get:

```
START RequestId: XXXX-XX-XXXX Version: $LATEST
2022-01-11T23:57:59.528Z XXXX-XX-XXXX INFO before ecs client send
END RequestId: XXXX-XX-XXXX
REPORT RequestId: XXXX-XX-XXXX Duration: 195100.70 ms Billed Duration: 195000 ms Memory Size: 128 MB Max Memory Used: 126 MB Init Duration: 1051.68 ms
2022-01-12T00:01:14.533Z XXXX-XX-XXXX Task timed out after 195.10 seconds
```

The message `before ecs client send` is a console.log I made just before the ecs.send request for debugging purposes. I think I've set up the policy correctly, as well as the Lambda VPC with the default outbound rule allowing all protocols on all ports to 0.0.0.0/0, so I have no idea where to look now. I have not found any way to debug aws-sdk v3 calls like you would on v2 by adding a logger to the config. Maybe that could help in understanding the issue.
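Two notes that may help, hedged since the full setup isn't visible here: first, v3 clients do accept a `logger` in the client configuration (any object with `debug`/`info`/`warn`/`error` methods), which restores v2-style call logging; second, a silent hang followed by a timeout is the classic symptom of a VPC-attached Lambda with no route to the public ECS endpoint — a permissive security-group outbound rule is not enough, the subnet needs a NAT gateway or an ECS interface VPC endpoint. A minimal sketch of the logger option (cluster and service names are placeholders):

```javascript
const { ECSClient, DescribeServicesCommand } = require("@aws-sdk/client-ecs");

// Passing `logger` makes the SDK log the request/response lifecycle, which
// helps tell "the request never left the VPC" apart from an API-side error.
const ecs = new ECSClient({ region: "us-east-1", logger: console });

exports.handler = async () => {
  console.log("before ecs client send");
  const out = await ecs.send(
    new DescribeServicesCommand({ cluster: "my-cluster", services: ["my-service"] })
  );
  return out.services?.[0]?.desiredCount;
};
```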
1 answer · 0 votes · 8 views · asked 4 months ago

CDK with typescript - error on cloud9

Hello everyone, I tried https://github.com/fortejas/example-serverless-python-api in a Cloud9 environment, but I got the following error. Commands that I used to set up:

```
mkdir sample-api
cd sample-api/
cdk init app --language typescript .
cd ~
git clone https://github.com/kasukur/example-serverless-python-api.git
ls -lrt example-serverless-python-api/
cp -rf example-serverless-python-api/lambda-api/ ~/environment/sample-api/.
cd ~/environment/sample-api/
# Delete the node_modules folder
# Delete package-lock.json
npm i @aws-cdk/aws-lambda-python-alpha --force -g
ec2-user:~/environment/sample-api $ cdk deploy ``` the error is ``` ec2-user:~/environment/sample-api $ cdk synth npm WARN exec The following package was not found and will be installed: ts-node /home/ec2-user/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:750 return new TSError(diagnosticText, diagnosticCodes); ^ TSError: ⨯ Unable to compile TypeScript: bin/sample-api.ts:4:10 - error TS2305: Module '"../lib/sample-api-stack"' has no exported member 'SampleApiStack'. 4 import { SampleApiStack } from '../lib/sample-api-stack'; ~~~~~~~~~~~~~~ at createTSError (/home/ec2-user/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:750:12) at reportTSError (/home/ec2-user/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:754:19) at getOutput (/home/ec2-user/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:941:36) at Object.compile (/home/ec2-user/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:1243:30) at Module.m._compile (/home/ec2-user/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:1370:30) at Module._extensions..js (node:internal/modules/cjs/loader:1153:10) at Object.require.extensions.<computed> [as .ts] (/home/ec2-user/.npm/_npx/1bf7c3c15bf47d04/node_modules/ts-node/src/index.ts:1374:12) at Module.load (node:internal/modules/cjs/loader:981:32) at Function.Module._load (node:internal/modules/cjs/loader:822:12) at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12) { diagnosticText: `\x1B[96mbin/sample-api.ts\x1B[0m:\x1B[93m4\x1B[0m:\x1B[93m10\x1B[0m - \x1B[91merror\x1B[0m\x1B[90m TS2305: \x1B[0mModule '"../lib/sample-api-stack"' has no exported member 'SampleApiStack'.\n` + '\n' + "\x1B[7m4\x1B[0m import { SampleApiStack } from '../lib/sample-api-stack';\n" + '\x1B[7m \x1B[0m \x1B[91m ~~~~~~~~~~~~~~\x1B[0m\n', diagnosticCodes: [ 2305 ] } Subprocess exited with error 1 ``` Could someone please help with this Thank you
1 answer · 0 votes · 48 views · asked 4 months ago

LocalDynamoDb intermittently generates HTTP 500 errors

For a big project we use DynamoDB Local in our unit tests. Most of the time, these tests pass. The code also works as expected in our production environments, where we use the "real" DynamoDB that's part of the VPC. However, sometimes the unit tests fail. Particularly when calling `putItem()` we sometimes get the following exception:

```txt
The request processing has failed because of an unknown error, exception or failure. (Service: DynamoDb, Status Code: 500, Request ID: db23be5e-ae96-417b-b268-5a1433c8c125, Extended Request ID: null)
software.amazon.awssdk.services.dynamodb.model.DynamoDbException: The request processing has failed because of an unknown error, exception or failure. (Service: DynamoDb, Status Code: 500, Request ID: db23be5e-ae96-417b-b268-5a1433c8c125, Extended Request ID: null)
	at software.amazon.awssdk.services.dynamodb.model.DynamoDbException$BuilderImpl.build(DynamoDbException.java:95)
	at software.amazon.awssdk.services.dynamodb.model.DynamoDbException$BuilderImpl.build(DynamoDbException.java:55)
	at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.unmarshall(AwsJsonProtocolErrorUnmarshaller.java:89)
	at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:63)
	at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:42)
	at software.amazon.awssdk.core.http.MetricCollectingHttpResponseHandler.lambda$handle$0(MetricCollectingHttpResponseHandler.java:52)
	at software.amazon.awssdk.core.internal.util.MetricUtils.measureDurationUnsafe(MetricUtils.java:64)
	at software.amazon.awssdk.core.http.MetricCollectingHttpResponseHandler.handle(MetricCollectingHttpResponseHandler.java:52)
	at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler.lambda$prepare$0(AsyncResponseHandler.java:89)
	at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
	at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
	at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler$BaosSubscriber.onComplete(AsyncResponseHandler.java:132)
	at java.base/java.util.Optional.ifPresent(Optional.java:183)
	at software.amazon.awssdk.http.crt.internal.AwsCrtResponseBodyPublisher.completeSubscriptionExactlyOnce(AwsCrtResponseBodyPublisher.java:216)
	at software.amazon.awssdk.http.crt.internal.AwsCrtResponseBodyPublisher.publishToSubscribers(AwsCrtResponseBodyPublisher.java:281)
	at software.amazon.awssdk.http.crt.internal.AwsCrtAsyncHttpStreamAdapter.onResponseComplete(AwsCrtAsyncHttpStreamAdapter.java:114)
	at software.amazon.awssdk.crt.http.HttpStreamResponseHandlerNativeAdapter.onResponseComplete(HttpStreamResponseHandlerNativeAdapter.java:33)
```

Our software is written in Kotlin 1.5.31 and the project is built with Maven. The DynamoDB Local version we use is 1.16.0, and we use AWS SDK for Java 2.16.67.
Our DynamoDB Local instance is spun up inside our unit tests as follows:

```kotlin
val url: String by lazy {
    System.setProperty("sqlite4java.library.path", "target/dynamo-native-libs")
    System.setProperty("aws.accessKeyId", "test-access-key")
    System.setProperty("aws.secretAccessKey", "test-secret-key")
    val port = randomFreePort()
    logger.info { "Creating local in-memory Dynamo server on port $port" }
    val instance = ServerRunner.createServerFromCommandLineArgs(arrayOf("-inMemory", "-port", port.toString()))
    try {
        instance.safeStart()
    } catch (e: Exception) {
        instance.stop()
        fail("Could not start Local Dynamo Server on port $port.", e)
    }
    Runtime.getRuntime().addShutdownHook(object : Thread() {
        override fun run() {
            logger.debug("Stopping Local Dynamo Server on port $port")
            instance.stop()
        }
    })
    "http://localhost:$port"
}
```

Our client is created with:

```kotlin
val client: DynamoDbAsyncClientWrapper by lazy {
    DynamoDbAsyncClientWrapper(
        DynamoDbAsyncClient.builder()
            .region(Region.EU_WEST_1)
            .credentialsProvider(DefaultCredentialsProvider.builder().build())
            .endpointOverride(URI.create(url))
            .httpClientBuilder(AwsCrtAsyncHttpClient.builder())
            .build()
    )
}
```

The code for our Kotlin DynamoDB wrapper DSL is open sourced and available here: https://github.com/ximedes/kotlin-dynamodb-wrapper The stacktrace thrown by DynamoDB Local is uninformative, and the asynchronous nature of the code also does not give a good hint as to where this error originates. We have tried several changes to our code, but we are running out of options. We are looking for a possible cause of this intermittent problem, or a way to reliably reproduce it.
1 answer · 0 votes · 40 views · asked 4 months ago

Why won't the CDK let me divide my network?

## Problem

I am trying to use CDK for the first time and trying to divide a `10.0.0.0/24` VPC into 8 /27 subnets, with 4 public and 4 private subnets spanning no more than 4 Availability Zones. When I run `cdk deploy` I receive the following error.

```
Error: 1 of /27 exceeds remaining space of 10.0.0.0/24
```

Multiple websites indicate that I can split the network this way:

* https://www.davidc.net/sites/default/subnets/subnets.html
* http://jodies.de/ipcalc?host=10.0.0.0&mask1=24&mask2=27

I know that AWS reserves 5 IP addresses in each subnet, but that should still leave 27 hosts per subnet, which is plenty for my exercise.

----
## Code

```
new ec2.Vpc(this, 'SimpleVpc', {
  cidr: '10.0.0.0/24',
  maxAzs: 4,
  natGateways: 1,
  subnetConfiguration: SimpleVpcStack.createSubnets(SubnetType.PUBLIC).concat(
    SimpleVpcStack.createSubnets(SubnetType.PRIVATE_WITH_NAT))
});

private static createSubnets(type: SubnetType): ec2.SubnetConfiguration[] {
  const label = SubnetType.PUBLIC === type ? 'pub' : 'pvt';
  const subnets: ec2.SubnetConfiguration[] = [];
  for (let i = 1; i < 5; i++) {
    subnets.push({
      cidrMask: 27,
      name: `${label}-${i}`,
      subnetType: type
    });
  }
  return subnets;
}
```

----
## Logs

```
subnets [
  { cidrMask: 27, name: 'pub-1', subnetType: 'Public' },
  { cidrMask: 27, name: 'pub-2', subnetType: 'Public' },
  { cidrMask: 27, name: 'pub-3', subnetType: 'Public' },
  { cidrMask: 27, name: 'pub-4', subnetType: 'Public' },
  { cidrMask: 27, name: 'pvt-1', subnetType: 'Private' },
  { cidrMask: 27, name: 'pvt-2', subnetType: 'Private' },
  { cidrMask: 27, name: 'pvt-3', subnetType: 'Private' },
  { cidrMask: 27, name: 'pvt-4', subnetType: 'Private' }
]
```
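A sanity check of the arithmetic (sketched in Python for brevity) points at a likely cause: eight /27s do fit a /24 exactly, but each `subnetConfiguration` entry is stamped out once per Availability Zone, so 8 entries with `maxAzs: 4` requests 32 subnets, which would need a /22:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/24")
subnet_size = 2 ** (32 - 27)   # 32 addresses in each /27

print(vpc.num_addresses)       # 256
print(8 * subnet_size)         # 256  -> eight /27s fit a /24 exactly
print(8 * 4 * subnet_size)     # 1024 -> 8 entries x 4 AZs needs a /22
```

So either the CIDR must grow (e.g. `10.0.0.0/22`), `maxAzs` must shrink, or the number of configuration entries must be reduced.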
1 answer · 0 votes · 21 views · asked 5 months ago

How to call the Values function in aws-sdk-go-v2?

I originally asked this question [on StackOverflow](https://stackoverflow.com/questions/69692732/how-to-call-the-values-function-in-aws-sdk-go-v2) but never received any reply. No one from AWS seems to be monitoring aws-sdk-go questions on StackOverflow, so I want to try again here. It would be great if more AWS employees could monitor re:Post and answer SDK questions.

I am writing a program that uses aws-sdk-go-v2 and receives a string input from the user that determines which storage class to use when storing an object in S3. I have to validate that the input is an allowed value, and if it is not, I give a list of allowed values. In v1 of aws-sdk-go, you could call [`s3.StorageClass_Values()`](https://docs.aws.amazon.com/sdk-for-go/api/service/s3/#StorageClass_Values) to enumerate the allowed `StorageClass` values.

```golang
func StorageClass_Values() []string
```

Example:

```golang
// v1.go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	fmt.Println(s3.StorageClass_Values())
}
```

```
$ go run v1.go
[STANDARD REDUCED_REDUNDANCY STANDARD_IA ONEZONE_IA INTELLIGENT_TIERING GLACIER DEEP_ARCHIVE OUTPOSTS]
```

But in aws-sdk-go-v2, types were introduced for StorageClass, and the function that enumerates the values must be called on a value of that type. From [the docs](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3@v1.17.0/types#StorageClass.Values):

```golang
func (StorageClass) Values() []StorageClass
```

This seems to require an initialized variable to call. Why is this the case? What's the idiomatic way to call this function? I've managed to get it to work in two different ways, and both seem wrong.
```golang
// v2.go
package main

import (
	"fmt"

	s3Types "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
	// Create an uninitialized StorageClass variable and call .Values()
	var sc s3Types.StorageClass
	fmt.Println(sc.Values())

	// One-liner that uses one of the types directly:
	fmt.Println(s3Types.StorageClassStandard.Values())
}
```

```
$ go run v2.go
[STANDARD REDUCED_REDUNDANCY STANDARD_IA ONEZONE_IA INTELLIGENT_TIERING GLACIER DEEP_ARCHIVE OUTPOSTS]
[STANDARD REDUCED_REDUNDANCY STANDARD_IA ONEZONE_IA INTELLIGENT_TIERING GLACIER DEEP_ARCHIVE OUTPOSTS]
```

The one-liner is better because it is more concise, but I have to reference one of the storage classes, which has no particular significance here, so it feels wrong. Which one should I use, and why? I wish they had simply kept the calling convention from v1; the `Values()` method in v2 doesn't use the receiver it is called on.
1 answer · 0 votes · 39 views · asked 6 months ago