Questions in Developer Tools

Invalid security token error when executing nested step function on Step Functions Local

Are nested step functions supported on AWS Step Functions Local? I am trying to create two step functions, where the outer one executes the inner one. However, when I try to execute the outer step function, I get the error: "The security token included in the request is invalid".

To reproduce, use the latest `amazon/aws-stepfunctions-local:1.10.1` Docker image and launch the container with the following command:

```sh
docker run -p 8083:8083 -e AWS_DEFAULT_REGION=us-east-1 -e AWS_ACCESS_KEY_ID=TESTID -e AWS_SECRET_ACCESS_KEY=TESTKEY amazon/aws-stepfunctions-local
```

Then create a simple HelloWorld _inner_ step function in the Step Functions Local container:

```sh
aws stepfunctions --endpoint-url http://localhost:8083 create-state-machine --definition "{\
  \"Comment\": \"A Hello World example of the Amazon States Language using a Pass state\",\
  \"StartAt\": \"HelloWorld\",\
  \"States\": {\
    \"HelloWorld\": {\
      \"Type\": \"Pass\",\
      \"End\": true\
    }\
  }}" --name "HelloWorld" --role-arn "arn:aws:iam::012345678901:role/DummyRole"
```

Then add a simple _outer_ step function that executes the HelloWorld one:

```sh
aws stepfunctions --endpoint-url http://localhost:8083 create-state-machine --definition "{\
  \"Comment\": \"OuterTestComment\",\
  \"StartAt\": \"InnerInvoke\",\
  \"States\": {\
    \"InnerInvoke\": {\
      \"Type\": \"Task\",\
      \"Resource\": \"arn:aws:states:::states:startExecution\",\
      \"Parameters\": {\
        \"StateMachineArn\": \"arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld\"\
      },\
      \"End\": true\
    }\
  }}" --name "HelloWorldOuter" --role-arn "arn:aws:iam::012345678901:role/DummyRole"
```

Finally, start an execution of the outer step function:

```sh
aws stepfunctions --endpoint-url http://localhost:8083 start-execution --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorldOuter
```

The execution fails with the _The security token included in the request is invalid_ error in the logs:

```
arn:aws:states:us-east-1:123456789012:execution:HelloWorldOuter:b9627a1f-55ed-41a6-9702-43ffe1cacc2c : {"Type":"TaskSubmitFailed","PreviousEventId":4,"TaskSubmitFailedEventDetails":{"ResourceType":"states","Resource":"startExecution","Error":"StepFunctions.AWSStepFunctionsException","Cause":"The security token included in the request is invalid. (Service: AWSStepFunctions; Status Code: 400; Error Code: UnrecognizedClientException; Request ID: ad8a51c0-b8bf-42a0-a78d-a24fea0b7823; Proxy: null)"}}
```

Am I doing something wrong? Is any additional configuration necessary?
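One configuration worth checking (an assumption on the editor's part, not something confirmed in the question): Step Functions Local supports endpoint-override environment variables, and if the nested `states:startExecution` call is not pointed back at the local service, it is sent to the real AWS endpoint, where the dummy test credentials would be rejected with exactly this "security token" error. A minimal sketch, assuming the documented `STEP_FUNCTIONS_ENDPOINT` override applies to nested executions:

```sh
# Hypothetical variation of the run command from the question: route nested
# Step Functions calls back to the local service instead of the AWS endpoint.
docker run -p 8083:8083 \
  -e AWS_DEFAULT_REGION=us-east-1 \
  -e AWS_ACCESS_KEY_ID=TESTID \
  -e AWS_SECRET_ACCESS_KEY=TESTKEY \
  -e STEP_FUNCTIONS_ENDPOINT=http://localhost:8083 \
  amazon/aws-stepfunctions-local
```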
0
answers
0
votes
10
views
asked 4 days ago

CodeCommit Git Windows fatal: Failed to write item to store [0x6c6]

Is there a solution for the *fatal* message *0x6c6* that shows up in Git Bash for Windows? It's annoying, although operations appear to continue normally apart from the "fatal" part. My coworkers using Windows experience the same problem. I've included the full error along with the *GIT_TRACE=1* output:

```
09:45:39.933420 run-command.c:654 trace: run_command: 'git credential-manager-core store'
09:45:40.042896 exec-cmd.c:237 trace: resolved executable dir: C:/Users/xxxxxxxx/AppData/Local/Programs/Git/mingw64/libexec/git-core
09:45:40.042896 git.c:748 trace: exec: git-credential-manager-core store
09:45:40.042896 run-command.c:654 trace: run_command: git-credential-manager-core store
fatal: Failed to write item to store. [0x6c6]
fatal: The array bounds are invalid
```

This is a newly set up Windows 10 Pro system. I'm using git 2.36.1, Python 3.10.4, and git-remote-codecommit 1.16, and we use a non-AWS identity provider for SSO.

```
$ aws --version
aws-cli/2.6.3 Python/3.9.11 Windows/10 exe/AMD64 prompt/off
```

Here's ~/.gitconfig on the affected system:

```
[credential "url pointing to aws codecommit"]
    provider = generic
[protocol "codecommit"]
    allow = always
```

Here's the relevant part of the repo's .git/config:

```
[core]
    repositoryformatversion = 0
    filemode = false
    bare = false
    logallrefupdates = true
    symlinks = false
    ignorecase = true
[submodule]
    active = .
[remote "origin"]
    url = codecommit::region://repo-name
    fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
    remote = origin
    merge = refs/heads/master
```

Linux systems don't have this problem.
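The trace shows Git invoking `git-credential-manager-core store` after the operation, and that store step is what emits the 0x6c6 fatal. Since git-remote-codecommit signs its own requests, one low-risk thing to try (an assumption, not a verified fix) is to stop Git Credential Manager from handling CodeCommit URLs at all; an empty `helper` value is the stock Git mechanism for resetting the helper list for a matching URL. A minimal sketch, with the region-specific URL as a placeholder:

```sh
# Hypothetical workaround sketch: clear the credential helper for CodeCommit
# HTTPS URLs so git-credential-manager-core is not invoked for them.
# The region in the URL is a placeholder; adjust to the repo's region.
git config --global credential."https://git-codecommit.us-east-1.amazonaws.com".helper ""
```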
1
answers
0
votes
24
views
asked 5 days ago

Webdriver test cases failing while setting up the connection

I'm trying to run a basic webdriver.io + Node.js test on Device Farm, but for iOS devices the test cases always fail while setting up the connection. Job ARN: arn:aws:devicefarm:us-west-2:612756076442:job:02bd6c95-640d-43b3-82eb-6f618777ac73/1a6364f3-7528-44b1-afa1-d6c2dc51d881/00000

```
2022-05-04T22:40:58.353Z ERROR @wdio/runner: Error: Failed to create session.
[0-0] Unable to connect to "http://localhost:4723/", make sure browser driver is running on that address.
[0-0] If you use services like chromedriver see initialiseServices logs above or in wdio.log file as the service might had problems to start the driver.
[0-0]     at startWebDriverSession (/private/tmp/scratchY4h2F6.scratch/test-packagex5aknf/node_modules/integration/node_modules/webdriver/build/utils.js:72:15)
[0-0]     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
[0-0]     at async Function.newSession (/private/tmp/scratchY4h2F6.scratch/test-packagex5aknf/node_modules/integration/node_modules/webdriver/build/index.js:46:45)
[0-0]     at async remote (/private/tmp/scratchY4h2F6.scratch/test-packagex5aknf/node_modules/integration/node_modules/webdriverio/build/index.js:77:22)
[0-0]     at async Runner._startSession (/private/tmp/scratchY4h2F6.scratch/test-packagex5aknf/node_modules/integration/node_modules/@wdio/runner/build/index.js:223:56)
```

For Android, the test cases instead complain that a Node version greater than 12 is required, even after adding `nvm install 18.1.0` to the test spec YAML.
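For context, a hedged sketch of the kind of custom test spec fragment involved here: the error above means nothing is answering on port 4723, so the usual shape of the fix is to start Appium in `pre_test` and wait for it before wdio creates a session. The phase layout and `DEVICEFARM_*` variables follow the standard Device Farm test spec format; the Appium flags, Node version, and the wdio command/config file name are assumptions.

```yaml
# Hedged Device Farm test spec fragment (not the questioner's actual spec).
version: 0.1
phases:
  install:
    commands:
      # Give the wdio runner a newer Node than the host image default.
      - nvm install 18.1.0
      - nvm use 18.1.0
  pre_test:
    commands:
      # Start Appium in the background, then wait until port 4723 answers
      # before wdio tries to create a session against it.
      - appium --log-timestamp >> $DEVICEFARM_LOG_DIR/appium.log 2>&1 &
      - >-
        for i in $(seq 1 30); do
        curl -s http://localhost:4723/wd/hub/status && break;
        sleep 2;
        done
  test:
    commands:
      - cd $DEVICEFARM_TEST_PACKAGE_PATH && npx wdio run wdio.conf.js
```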
0
answers
0
votes
1
views
asked 17 days ago

Cloud9 - EXTREMELY High CPU Usage

Lately Cloud9 has been using an insane amount of CPU, to the point that nothing will load. The node process (vfs-worker) will randomly start using 100-200% CPU and not stop until I `killall node` or reboot. This is on a fresh Ubuntu install with a fresh Cloud9 install. I've switched servers three times, starting fresh every time, and I'm still having the issue. Currently I'm on a 6-core VPS with 16 GB of RAM and NVMe drives with nothing else running; the machine has no speed issues until I pull up Cloud9, and then this happens. Current top output (typical of the issue):

```
top - 05:25:53 up 8 min,  0 users,  load average: 1.58, 1.50, 0.75
Tasks: 140 total,   3 running, 137 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.6 us,  6.8 sy,  0.0 ni, 68.5 id,  0.0 wa,  0.0 hi,  0.1 si, 21.1 st
MiB Mem :  16008.7 total,  14915.5 free,    949.0 used,    144.2 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.  14816.9 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
  591 root      20   0 1682708 839164  31552 R 178.0   5.1   8:07.95 node
 1111 root      20   0  690472  53432  31496 S   7.7   0.3   0:09.81 node
  548 root      20   0   13808   9044   7476 S   6.7   0.1   0:09.18 sshd
 1147 root      20   0   11848   3756   3256 R   0.7   0.0   0:00.88 top
   11 root      20   0       0      0      0 R   0.3   0.0   0:10.31 rcu_sched
  112 root      20   0       0      0      0 I   0.3   0.0   0:01.13 kworker/u12:1-events_power_efficient
    1 root      20   0  166912  10904   8388 S   0.0   0.1   0:04.07 systemd
    2 root      20   0       0      0      0 S   0.0   0.0   0:00.15 kthreadd
    3 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 rcu_gp
    4 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 rcu_par_gp
    6 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/0:0H-events_highpri
    7 root      20   0       0      0      0 I   0.0   0.0   0:00.03 kworker/0:1-events
    8 root      20   0       0      0      0 I   0.0   0.0   0:00.99 kworker/u12:0-events_unbound
    9 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 mm_percpu_wq
   10 root      20   0       0      0      0 S   0.0   0.0   0:00.07 ksoftirqd/0
   12 root      rt   0       0      0      0 S   0.0   0.0   0:00.15 migration/0
   13 root     -51   0       0      0      0 S   0.0   0.0   0:00.00 idle_inject/0
```

This is the command hogging CPU:

```
root 591 120 3.4 1443908 567652 ? Rl 05:17 15:42 vfs-worker {"pingInterval":5000,"nodePath":"/root/.c9/node_modules","tmuxBin":"/root/.c9/bin/tmux","root":"/","debug":false,"connectionTimeout":60000,"metapath":"/.c9/metadata","envmetapath":"/.c9/metadata/environment","projectDir":"/","defaultEnv":{"HGUSER":"root","EDITOR":"","PORT":"8080","C9_PORT":"8080","IP":"127.0.0.1","C9_HOSTNAME":"154.12.230.222","C9_USER":"root","C9_PROJECT":"Box","C9_PID":"bfcfdc59308b461990aeb16d918ce011"},"useVfsLoaderCache":false,"environmentId":"bfcfdc59308b461990aeb16d918ce011","bytesPerSecond":3145728,"extendApi":{"collab":{"file":"c9.ide.collab/server/collab-server.js","user":{"userId":"301735008956","name":"root"},"project":{"workspaceId":"bfcfdc59308b461990aeb16d918ce011","name":"Box","type":"ssh"},"environment":{"id":"bfcfdc59308b461990aeb16d918ce011","name":"Box","type":"ssh","ideTemplateName":"Cloud9 Amazon Linux"},"readonly":false,"session":{},"nodePath":"/root/.c9/node_modules"},"ping":{"file":"c9.vfs.client/ping-service.js"},"store":{"file":"c9.vfs.client/store-service.js"}}}
```
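One detail in the vfs-worker command line above that may be relevant (an observation and assumption, not a confirmed cause): `"root":"/"` and `"projectDir":"/"` mean the environment is rooted at the filesystem root, so the worker's file watcher/indexer has the entire disk in scope. A hedged diagnostic sketch to check whether the busy worker is walking the filesystem while CPU is pegged (the 5-second window and output path are arbitrary):

```sh
# Hedged diagnostic sketch: see what the busy vfs-worker process is doing.
PID=$(pgrep -f vfs-worker | head -n1)

# Sample its system calls for a few seconds; a flood of openat/stat/getdents64
# across unrelated paths would suggest the "/" project directory is being indexed.
sudo timeout 5 strace -f -p "$PID" -e trace=openat,stat,getdents64 -o /tmp/vfs-worker-trace.txt
wc -l /tmp/vfs-worker-trace.txt
```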
0
answers
0
votes
7
views
asked 20 days ago

CodeBuild - Extremely long build times/caching

My project is a large JS/Yarn monorepo (using workspaces) that is contributed to dozens of times a day by multiple devs. Every time a PR is opened, we build, lint, and test the project, which requires running `yarn install` before anything else. Once a merge occurs to the main branch, any open PRs must merge with the new main branch, and the PR checker needs to run again before _that_ branch is merged. As you can imagine, there can be a pretty large backlog of open PRs when we're in crunch time. I have successfully used caching (both S3 and local) in smaller projects before, however I can't get **local** caching working with our monorepo (this is a large project, so S3 caching is much too slow for us, as far as I can tell). When we try local caching, our build fails with `EEXIST: file already exists, mkdir '/codebuild/output/src11111111/src/github.com/OUR_ORG/OUR_PROJECT/node_modules/@OUR_PACKAGE/common'`. This behavior is documented across the web (see the sketch after this list):

- https://github.com/aws-samples/aws-codebuild-samples/issues/8
- https://stackoverflow.com/questions/55890275/aws-codebuild-does-not-work-with-yarn-workspaces

Some of the suggested remedies involve using S3, but as mentioned this is a large project and we update packages at least once every two weeks (some weeks, multiple times), and the initial upload more than doubles our build time to 40 minutes. Downloading doesn't save us much time either.

- Are there any reasonable steps we can take to cache our node_modules directories so we don't have to pull from Yarn every time?
- Are there any other solutions to speed up these build times (we're currently at ~14 minutes after optimizing other parts of the build process)?
- Do you have any example (large) monorepos you can point to as a template?
- Any other tips to speed up our build times?
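For reference, a hedged buildspec sketch of the local custom cache configuration this question is about. The question does not include its buildspec, so everything below is an assumption: the `packages/*` workspace layout, the script names, and the `--frozen-lockfile` flag are placeholders.

```yaml
# Hedged buildspec sketch (not from the question): local custom caching of the
# workspace node_modules trees so `yarn install` only fills in what is missing.
version: 0.2
phases:
  install:
    commands:
      # Reuse whatever the local cache restored for node_modules.
      - yarn install --frozen-lockfile
  build:
    commands:
      - yarn lint
      - yarn test
cache:
  paths:
    - 'node_modules/**/*'
    - 'packages/*/node_modules/**/*'
```

For the `cache.paths` section to have any effect with local caching, the CodeBuild project itself also has to have local caching enabled in custom cache mode (LOCAL_CUSTOM_CACHE); the EEXIST failure reported above is the Yarn-workspaces clash described in the two linked issues, where restored directories/symlinks already exist when `yarn install` tries to recreate them.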
1
answers
1
votes
9
views
asked 23 days ago

Trouble in Node.js sending data from an HTML form to the server

I have a question about AWS Cloud9 and Node.js. I am using this Node.js tutorial to teach myself how to communicate between the client side and the server side: https://dev.to/gbudjeakp/how-to-connect-your-client-side-to-your-server-side-using-node-and-express-2i71

I followed the tutorial and set it up on my laptop, and everything worked fine. I even figured out how to write the data sent from the client to a file using writeFileSync. The problem is getting it to work in the AWS Cloud9 integrated development environment. When I run the index.js file by itself on Cloud9, I get the message stating that the server is running on port 8080, as expected. However, I cannot get the HTML file to interface with index.js. The HTML file contains a form that is supposed to send the form content to index.js. I believe the problem is the address in the HTML form's action. I have researched and tried several different configurations, but none of them work. Currently, the form looks like this:

```html
<form action="https://localhost:8080/login" method="POST">
    <input type="text" name="username" placeholder="username">
    <input type="text" name="password" placeholder="password">
    <button type="submit">Login</button>
</form>
```

Using localhost:8080 results in an error message of "localhost refused to connect." When I have used variations of my username and AWS project name, I receive an error stating that the server IP address could not be found. Any help would be appreciated.
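A hedged sketch of the usual shape of the fix (the tutorial's own server code is not shown in the question, so this is an assumption): the browser does not run on the Cloud9 instance, so `localhost` in the form points at the reader's own machine. If the Express app serves the HTML page itself (for example via `express.static`, with `express.urlencoded` to parse the POST body), the form can use a relative action and no host name at all, and the page is then opened through Cloud9's "Preview Running Application" URL rather than localhost:

```html
<!-- Hedged sketch: a relative action keeps the request on the same origin that
     served the page; the page itself must be served by the Express app and
     opened through the Cloud9 preview URL, not localhost. -->
<form action="/login" method="POST">
    <input type="text" name="username" placeholder="username">
    <input type="text" name="password" placeholder="password">
    <button type="submit">Login</button>
</form>
```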
1
answers
0
votes
2
views
asked 24 days ago

Upgrade LZ to v2.4.4 - Java 1.8.0 error

I'm trying to upgrade our AWS Landing Zone (LZ) to v2.4.4 but am getting stuck at the first step of the CodeBuild project (container configuration). It appears to use an OS image whose package upgrade breaks during the process: `apt-get upgrade` tries to install a newer Java JDK, which raises an interactive prompt that is never answered, and that appears to cause the error below:

```
Setting up java-1.8.0-amazon-corretto-jdk:amd64 (1:8.332.08-1) ...

Configuration file '/usr/lib/jvm/java-1.8.0-amazon-corretto/jre/lib/security/policy/limited/local_policy.jar'
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** local_policy.jar (Y/I/N/O/D/Z) [default=N] ?
dpkg: error processing package java-1.8.0-amazon-corretto-jdk:amd64 (--configure):
 end of file on stdin at conffile prompt
```

That, in turn, causes the Google Chrome package processing to fail further down the line:

```
Errors were encountered while processing:
 java-1.8.0-amazon-corretto-jdk:amd64
W: Target Packages (main/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list.d/google-chrome.list:3 and /etc/apt/sources.list.d/google.list:1
W: Target Packages (main/binary-all/Packages) is configured multiple times in /etc/apt/sources.list.d/google-chrome.list:3 and /etc/apt/sources.list.d/google.list:1
E: Sub-process /usr/bin/dpkg returned an error code (1)
```

As an aside, I'm not sure why AWS is still using apt-get rather than apt to manage packages, or why Chrome packages are installed for container executions at all. Anyway, has anyone faced and fixed this? Thanks.
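The `end of file on stdin at conffile prompt` line indicates dpkg asked its conffile question in a non-interactive build and received no answer. A hedged sketch of the standard way to make such an upgrade step unattended; where exactly this belongs in the LZ build scripts is an assumption, and the flags simply keep the currently installed conffile by default:

```sh
# Hedged sketch: run the upgrade non-interactively and answer conffile prompts
# automatically (keep the existing file unless the package insists otherwise).
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get upgrade -y \
  -o Dpkg::Options::="--force-confdef" \
  -o Dpkg::Options::="--force-confold"
```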
0
answers
0
votes
2
views
asked 24 days ago

Emergency Floating Point Logic Error Repairs?

Dear Corretto,

We have been running into brick-wall problems trying to engage Oracle, the JCP, and OpenJDK on the subject of Java floating point; they are refusing to interact with us, discuss the matter, or be persuaded. This leaves no options at all in what is, for us, a necessary and unavoidable situation.

From what we have gathered, both ourselves and more widely, IEEE 754 has a blind spot: an incompleteness through which either a right or a wrong result can creep in. It appears toward the right-hand side of float or double arithmetic and StrictMath method calls, where the last decimal place can take a straddling value. At that point there is room for confusion between the decimal and the binary: the decimal, which the human and any further logic, mathematics, or software needs, and the binary, which is converted to and from, and which the computer needs in order to perform operations. Since binary is for computers and decimal is for humans, dealing with both means dealing with them one at a time, or converting entirely from one to the other. This is exactly what Java floating point does not do. It conflates the two, at the wrong time, with denormal and pronormal values at the last unit place of float and double decimal numbers, leading to what are accurately referred to as floating point errors, even though no Java exception objects are thrown at the time.

When the Java switch statement was enhanced, so that programmers could switch directly on a String and could also coalesce switch options using the -> operator, there was no split in Java because of an incompatibility. The fact that people had to learn something new about switch was no problem either. The two compatible options, one of which was a technical enhancement, only improved the circumstances for everyone: programmers, users, and vendors.

The BigInteger, BigDecimal, and https://github.com/eobermuhlner big-math workarounds waste too much memory and too much speed, both. The superior approach for the floating point types, setting aside the problems of arbitrary precision on its own, is simply to correct floating point errors by means of the SSE hardware in the CPU's floating point unit, the maths co-processor, which almost all PCs have as of 2022; in fact they have successors beyond SSE, since SSE itself goes all the way up to version 4.2.

The thing about a patch is that it is not the mainstream, primary product. People have the specific option to include it or not. Who could want or need a denormal or pronormal value, exactly? Any speed difference between accurate and inaccurate results is negligible because of SSE and its successors anyway. Why should the broad community of Java developers have no wider choice about how to respond to an incomplete, and therefore incorrect and flawed, standard and implementation? Present workarounds are only that, workarounds; and even within float and double ranges they are slower and larger in RAM than they need to be. A patch could leave the writing, reading, and exchange of float and double between Java code spaces exactly the same, or enhanced, providing a complete choice between two Java floating point operation modes.

Most importantly, what should happen about floating point error correction if Oracle, OpenJDK, the JCP, et al., the ideal points for improvement on FP errors, never listen and leave these errors and operational problems in place?
Surely it would be better for a downstream vendor to offer a patch for the problem, which is a low, really negligible, risk alongside its mainstream OpenJDK and JRE builds, than for this error problem to remain ongoing and neglected, with its present consequences, forever. If these are the needs and the circumstances, would Corretto still consider a special floating point patch for Corretto Java, despite everything so far?
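For readers unfamiliar with the artifact the letter describes, a minimal Java sketch of the decimal-versus-binary mismatch and of the BigDecimal-style correction the letter calls a workaround (the ten-digit scale is an arbitrary choice for illustration):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class FloatingPointArtifact {
    public static void main(String[] args) {
        // Binary double arithmetic: the decimals 0.1 and 0.2 have no exact
        // binary representation, so the printed result is not 0.3.
        double sum = 0.1 + 0.2;
        System.out.println(sum);            // 0.30000000000000004

        // The kind of workaround the letter refers to: round back to a decimal
        // scale with BigDecimal, at the cost of extra object allocation.
        BigDecimal corrected = BigDecimal.valueOf(sum).setScale(10, RoundingMode.HALF_UP);
        System.out.println(corrected);      // 0.3000000000
    }
}
```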
0
answers
0
votes
3
views
asked a month ago

CodeBuild failing with invalidParameterError on build with a valid parameter given

I'm trying to create a Lambda layer in Serverless and have it deploy to AWS, creating the Lambda layer for use in other deployments. However, I'm running into an issue where the `Lambda:PublishLayerVersion` call is failing because of `CompatibleArchitectures`. I'm wondering whether there is a mistake I'm missing, or whether Serverless itself has an issue, because the Action in the error uses a lowercase 'p' ("Lambda:publishLayerVersion") while the docs here: https://docs.aws.amazon.com/lambda/latest/dg/API_PublishLayerVersion.html state it is "PublishLayerVersion". It is also possible that the SDK error is legitimate and the `CompatibleArchitectures` parameter simply isn't supported in us-west-1, but I have a hard time finding docs that tell me what is supported in which regions.

serverless.yml spec:

```
provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
  region: us-west-1
  stage: ${opt:stage, 'stage'}
  deploymentBucket:
    name: name.serverless.${self:provider.region}.deploys
  deploymentPrefix: serverless
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:GetObject
      Resource: "arn:aws:s3:::name.serverless.${self:provider.region}/*"
    - Effect: Allow
      Action:
        - cloudformation:DescribeStacks
      Resource: "*"
    - Effect: Allow
      Action:
        - lambda:PublishLayerVersion
      Resource: "*"

layers:
  aws-abstraction-services-layer:
    # name: aws-abstraction-services-layer
    path: aws-abstraction-layer
    description: "This is the goal of uploading our abstractions to a layer to upload and use to save storage in deployment packages"
    compatibleRuntimes:
      - python3.8
    allowedAccounts:
      - '*'

plugins:
  - serverless-layers
  - serverless-python-requirements
```

Output of the build log:

```
[Container] 2022/04/12 17:14:41 Running command serverless deploy
Running "serverless" from node_modules

Deploying aws-services-layer to stage stage (us-west-1)

[ LayersPlugin ]: => default
... ○ Downloading requirements.txt from bucket...
... ○ requirements.txt The specified key does not exist..
... ○ Changes identified ! Re-installing...
... ○ pip install -r requirements.txt -t .
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
aws-sam-cli 1.40.1 requires requests==2.25.1, but you have requests 2.27.1 which is incompatible.
WARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv
WARNING: You are using pip version 21.1.2; however, version 22.0.4 is available.
You should consider upgrading via the '/root/.pyenv/versions/3.8.10/bin/python3.8 -m pip install --upgrade pip' command.
Collecting requests
  Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB)
Collecting charset-normalizer~=2.0.0
  Downloading charset_normalizer-2.0.12-py3-none-any.whl (39 kB)
Collecting certifi>=2017.4.17
  Downloading certifi-2021.10.8-py2.py3-none-any.whl (149 kB)
Collecting idna<4,>=2.5
  Downloading idna-3.3-py3-none-any.whl (61 kB)
Collecting urllib3<1.27,>=1.21.1
  Downloading urllib3-1.26.9-py2.py3-none-any.whl (138 kB)
Installing collected packages: urllib3, idna, charset-normalizer, certifi, requests
Successfully installed certifi-2021.10.8 charset-normalizer-2.0.12 idna-3.3 requests-2.27.1 urllib3-1.26.9
... ○ Created layer package /codebuild/output/src847310000/src/.serverless/aws-services-layer-stage-python-default.zip (0.8 MB)
... ○ Uploading layer package...
... ○ OK...

ServerlessLayers error:
Action: Lambda:publishLayerVersion
Params: {"Content":{"S3Bucket":"name.serverless.us-west-1.deploys","S3Key":"serverless/aws-services-layer/stage/layers/aws-services-layer-stage-python-default.zip"},"LayerName":"aws-services-layer-stage-python-default","Description":"created by serverless-layers plugin","CompatibleRuntimes":["python3.8"],"CompatibleArchitectures":["x86_64","arm64"]}
AWS SDK error: CompatibleArchitectures are not supported in us-west-1. Please remove the CompatibleArchitectures value from your request and try again

[Container] 2022/04/12 17:14:47 Command did not exit successfully serverless deploy exit status 1
[Container] 2022/04/12 17:14:47 Phase complete: BUILD State: FAILED
[Container] 2022/04/12 17:14:47 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: serverless deploy. Reason: exit status 1
[Container] 2022/04/12 17:14:47 Entering phase POST_BUILD
[Container] 2022/04/12 17:14:47 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2022/04/12 17:14:47 Phase context status code: Message:
```
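A hedged way to separate the plugin question from the region question (everything below reuses names from the error output itself; it is not a confirmed fix): publish the same layer zip once with the AWS CLI and no `--compatible-architectures` flag. If that succeeds, the region is rejecting only that parameter, and the remaining question is how the serverless-layers plugin adds it to the request. The lowercase 'p' in the plugin's log, by contrast, is just how the plugin prints the action name, not the wire-level API name.

```sh
# Hedged sketch reusing the bucket/key shown in the error output above.
aws lambda publish-layer-version \
  --region us-west-1 \
  --layer-name aws-services-layer-stage-python-default \
  --description "manual check without CompatibleArchitectures" \
  --compatible-runtimes python3.8 \
  --content S3Bucket=name.serverless.us-west-1.deploys,S3Key=serverless/aws-services-layer/stage/layers/aws-services-layer-stage-python-default.zip
```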
1
answers
1
votes
8
views
asked a month ago

CodeDeploy agent duplicate log entries

I'm experiencing an issue with a deployment to one of several Windows servers; other servers with the same job deploy correctly. The issue is that upon extraction of a zip file, the following error occurs:

```
Exception calling "ExtractToDirectory" with "2" argument(s): "The file 'C:\Prog....c.yml' already exists."
```

(path shortened for brevity)

After the deployment failed, I looked in the directory and, sure enough, the file was there, and it had been written by the CodeDeploy job. I noticed something odd though: in the CodeDeploy agent logs there are two copies of every entry, including the error above. It appears that CodeDeploy is running two instances, both trying to perform the same deployment, and therefore they can't write to the same file. Here is a snippet from the agent log:

```
2022-04-03T00:01:51 DEBUG [codedeploy-agent(13484)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: PollHostCommand: Host Command = nil
2022-04-03T00:01:51 DEBUG [codedeploy-agent(13484)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: PollHostCommand: Host Command = nil
2022-04-03T00:01:52 DEBUG [codedeploy-agent(13484)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Calling PollHostCommand:
2022-04-03T00:01:52 DEBUG [codedeploy-agent(13484)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Calling PollHostCommand:
2022-04-03T00:01:52 INFO [codedeploy-agent(13484)]: Version file found in C:/ProgramData/Amazon/CodeDeploy/.version with agent version OFFICIAL_1.3.2.1902_msi.
2022-04-03T00:01:52 INFO [codedeploy-agent(13484)]: Version file found in C:/ProgramData/Amazon/CodeDeploy/.version with agent version OFFICIAL_1.3.2.1902_msi.
```

On servers where the deployment succeeds, I only see one line for each of the above in the log files. I have restarted the agent service to no avail, and have also reinstalled it. Any help is appreciated.
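A hedged diagnostic sketch for the affected host (this checks the "two agents running" theory; the filters are assumptions about names and paths, not a confirmed cause): list the registered CodeDeploy services and the running agent processes, and compare against a healthy server.

```powershell
# Hedged diagnostic sketch for the affected Windows server.

# Is more than one CodeDeploy-related Windows service registered?
Get-Service | Where-Object { $_.DisplayName -like '*CodeDeploy*' } |
    Format-Table Name, DisplayName, Status

# Are two agent host processes running? (The PID shown in the log above is 13484.)
Get-CimInstance Win32_Process |
    Where-Object { $_.CommandLine -like '*codedeploy*' } |
    Select-Object ProcessId, CreationDate, CommandLine
```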
0
answers
0
votes
1
views
asked a month ago

Java Floating Point Correction Patch in Corretto in a Compatible Manner.

Dear Corretto,

This email is in regard to [https://github.com/corretto/corretto-18/issues/15](https://github.com/corretto/corretto-18/issues/15), which is closed at this time. I/we are not in a position to simply leave this particular subject where it was last left, and we don't want the discussion or consideration of this subject to remain closed.

As we tried to describe at the beginning of our thread at [https://github.com/corretto/corretto-18/issues/15](https://github.com/corretto/corretto-18/issues/15), Java floating point denormal and pronormal values, from arithmetic and StrictMath function calls, do not, in terms of range accuracy, correspond to decimal or binary mathematics. Because they are not one or the other, they don't correspond to anything. IEEE 754 is, for these reasons, incomplete, and trying to justify Java's present state by appeal to IEEE 754 is a non sequitur, as, more importantly, is the present floating point state of OpenJDK and Corretto, at least right now.

- To start with, how could correcting Java's default floating point behaviour possibly break compatibility? What on earth could correcting the present strange, erroneous, and inconsistent behaviour of Java floating point break compatibility with? We can't really see or think of one example, certainly not one that is good for Java rather than for a lower-level language. Forgive our naïveté, but are there really any such real, useful examples, pertinent to a Java space, that need floating point errors without their base-10 range correction?

- Even this is not the main thrust of our request. Our submission to Corretto is still for the implementation and release of a Corretto Java patch to itself. A patch can be included or omitted, installed or not, still allowing compatibility. Even with the inclusion of a patch, various switches or options for the runtime could be involved to enable a changed floating point arithmetic or StrictMath, yet still allow compatibility, either totally or in some desired partial or co-integral manner, which could still remain backward compatible, if one pauses to think. :)

We can't simply allow this discussion to halt, because we are in fact beginning to need corrected OpenJDK floating point arithmetic, and an equivalently corrected StrictMath, for the 2D and 3D visual graphics work we are now planning (see the sketch after this letter for the trade-off involved). The present workarounds are too slow and waste too much memory. We need continuous float and double range accuracy and the other facilities of those primitive types, as will a large number of programmers and companies out there who haven't come forward or persisted.

Can those involved at Corretto reconsider this matter, and implement and release a base-10 floating point arithmetic and StrictMath correction (or mode-varying) patch for Corretto Java, for present and future versions, for its JDK and JRE, on all offered Corretto platforms, now and into the future?
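For context on the "too slow, too much memory" claim, a minimal Java sketch of the trade-off the letter describes: keeping a long accumulation in plain double versus routing every step through BigDecimal. The step count and values are arbitrary illustrations, not the letter's own workload.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Hedged illustration: double drifts over many decimal steps, while the
// BigDecimal "workaround" is exact but allocates an object per operation,
// which is the cost the letter objects to for graphics-style inner loops.
public class AccumulationModes {
    public static void main(String[] args) {
        final int steps = 1_000_000;

        // Plain double: fast, no allocation, but each 0.1 carries binary
        // representation error that accumulates across the loop.
        double d = 0.0;
        for (int i = 0; i < steps; i++) {
            d += 0.1;
        }
        System.out.println("double     : " + d); // not exactly 100000.0

        // Exact decimal accumulation with BigDecimal.
        BigDecimal b = BigDecimal.ZERO;
        BigDecimal step = new BigDecimal("0.1");
        for (int i = 0; i < steps; i++) {
            b = b.add(step);
        }
        System.out.println("BigDecimal : " + b.setScale(1, RoundingMode.HALF_UP));
    }
}
```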
1
answers
0
votes
4
views
asked 2 months ago