
Questions tagged with IPv6



AWS Lambda function not able to resolve or connect to an IPv6-only domain

I implemented an AWS Lambda function that should pass an Alexa custom skill event to my domain for processing; see the code below.

```
const https = require('https');

exports.handler = (event, context, callback) => {
    var options = {
        hostname: '<my.domain.com>',
        path: '/<mypath>',
        port: 443,
        method: 'POST',
        rejectUnauthorized: false,
        headers: {
            'Content-Type': 'application/json',
            'Authorization': '<my base64 user:password>'
        }
    };
    const req = https.request(options, (res) => {
        let body = '';
        console.log('Status:', res.statusCode);
        console.log('Headers:', JSON.stringify(res.headers));
        res.setEncoding('utf8');
        res.on('data', (chunk) => { body += chunk; });
        res.on('end', () => {
            console.log('Successfully processed HTTPS response');
            body = JSON.parse(body);
            callback(null, body);
        });
    });
    req.on('error', callback);
    req.write(JSON.stringify(event));
    req.end();
};
```

The function runs serverless and is not attached to a VPC. The domain <my.domain.com> resolves to an IPv6 address, and I am able to connect to my host, for example from an internet-connected machine using curl, and receive the expected answers:

```
curl -i -k -v -X POST -d testcase.json -u <user:password> https://<my.domain.com>:<my port>/<my path>
```

In AWS I implemented a test case and ran it. The test returned the error ENOTFOUND from getaddrinfo while trying to resolve my domain; see the execution result below.

Test Event Name: Test0001

Response:
```
{
  "errorType": "Error",
  "errorMessage": "getaddrinfo ENOTFOUND <my.domain.com>",
  "trace": [
    "Error: getaddrinfo ENOTFOUND <my.domain.com>",
    "    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26)"
  ]
}
```

Function Logs:
```
LOGS Name: cloudwatch_lambda_agent State: Subscribed Types: [platform]
EXTENSION Name: cloudwatch_lambda_agent State: Ready Events: [SHUTDOWN,INVOKE]
START RequestId: 78314f37-e991-4d3d-b4f2-03da64bf91b7 Version: $LATEST
2022-09-24T04:59:06.966Z 78314f37-e991-4d3d-b4f2-03da64bf91b7 ERROR Invoke Error {"errorType":"Error","errorMessage":"getaddrinfo ENOTFOUND <my.domain.com>","code":"ENOTFOUND","errno":-3008,"syscall":"getaddrinfo","hostname":"<my.domain.com>","stack":["Error: getaddrinfo ENOTFOUND <my.domain.com>","    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26)"]}
END RequestId: 78314f37-e991-4d3d-b4f2-03da64bf91b7
REPORT RequestId: 78314f37-e991-4d3d-b4f2-03da64bf91b7 Duration: 425.43 ms Billed Duration: 426 ms Memory Size: 128 MB Max Memory Used: 76 MB Init Duration: 248.14 ms
```

During my investigation I found the hint to add the option 'family: 6'. With this option the test case now resolves the domain to the correct IPv6 address, but then returns EAFNOSUPPORT while trying to connect to that address; see the execution result below.

Test Event Name: Test0001

Response:
```
{
  "errorType": "Error",
  "errorMessage": "connect EAFNOSUPPORT xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:443 - Local (undefined:undefined)",
  "trace": [
    "Error: connect EAFNOSUPPORT xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:443 - Local (undefined:undefined)",
    "    at internalConnect (node:net:953:16)",
    "    at defaultTriggerAsyncIdScope (node:internal/async_hooks:465:18)",
    "    at GetAddrInfoReqWrap.emitLookup [as callback] (node:net:1097:9)",
    "    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:73:8)"
  ]
}
```

Function Logs:
```
LOGS Name: cloudwatch_lambda_agent State: Subscribed Types: [platform]
EXTENSION Name: cloudwatch_lambda_agent State: Ready Events: [INVOKE,SHUTDOWN]
START RequestId: f3493148-071f-466d-94c7-d29a0d715640 Version: $LATEST
2022-09-24T05:06:52.877Z f3493148-071f-466d-94c7-d29a0d715640 ERROR Invoke Error {"errorType":"Error","errorMessage":"connect EAFNOSUPPORT xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:443 - Local (undefined:undefined)","code":"EAFNOSUPPORT","errno":-97,"syscall":"connect","address":"xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx","port":443,"stack":["Error: connect EAFNOSUPPORT xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:443 - Local (undefined:undefined)","    at internalConnect (node:net:953:16)","    at defaultTriggerAsyncIdScope (node:internal/async_hooks:465:18)","    at GetAddrInfoReqWrap.emitLookup [as callback] (node:net:1097:9)","    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:73:8)"]}
END RequestId: f3493148-071f-466d-94c7-d29a0d715640
REPORT RequestId: f3493148-071f-466d-94c7-d29a0d715640 Duration: 447.45 ms Billed Duration: 448 ms Memory Size: 128 MB Max Memory Used: 76 MB Init Duration: 231.52 ms
```

Any further investigation was not successful. I assume it is an issue with IPv6, but I am not able to solve it. Any help is appreciated. Thank you in advance.

Joachim
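Note: a Lambda function that is not attached to a dual-stack VPC has historically had IPv4-only outbound connectivity, which would be consistent with EAFNOSUPPORT once the AAAA record resolves. Below is a minimal diagnostic sketch, not the asker's code, that probes which address families the runtime can actually resolve and connect with; 'example.com' and port 443 are placeholder values to substitute with your own host.

```
// probe.js — hedged diagnostic sketch; hostname/port are placeholders.
const dns = require('dns').promises;
const net = require('net');

async function probe(hostname, port) {
    for (const family of [4, 6]) {
        try {
            // Resolve the name restricted to a single address family.
            const { address } = await dns.lookup(hostname, { family });
            console.log(`IPv${family}: ${hostname} resolves to ${address}`);
            // Attempt a raw TCP connect to see whether the runtime's
            // network stack supports this family at all.
            await new Promise((resolve, reject) => {
                const socket = net.connect({ host: address, port, family });
                socket.setTimeout(3000);
                socket.once('timeout', () => socket.destroy(new Error('ETIMEDOUT')));
                socket.once('connect', () => { socket.destroy(); resolve(); });
                socket.once('error', reject);
            });
            console.log(`IPv${family}: connect to ${address}:${port} succeeded`);
        } catch (err) {
            console.log(`IPv${family}: failed with ${err.code || err.message}`);
        }
    }
}

exports.handler = async () => probe('example.com', 443);
```

If the IPv6 connect fails with EAFNOSUPPORT while IPv4 resolution fails with ENOTFOUND, the target is effectively unreachable from this environment; the usual workaround is to put a dual-stack or IPv4 front end (for example a proxy or load balancer with an A record) in front of the IPv6-only host.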
2 answers · 0 votes · 17 views · asked 2 days ago

EMR Serverless IPv6 connectivity issue in a private VPC subnet

Hi, I have been experimenting with EMR Serverless this week, following the GA release. I found that I am not able to download my jar files from AWS S3 when I run an EMR Serverless job from a private VPC subnet. I have already tested connectivity from EC2 using the same subnet and the same security group; the problem exists only in the EMR Serverless job. From the error logs I can see it is trying to connect to the Spark driver's IPv6 address, and I am not sure why it cannot connect.

Region: ap-northeast-1
Subnet: ap-northeast-1a (I have tried other subnets too)

Here are my tail logs:

```
22/06/03 19:08:42 INFO SparkContext: Added JAR file:/tmp/spark-cdc6b2aa-657b-4464-b7d8-3cbe2fea3872/xbean-asm9-shaded-4.20.jar at spark://[2406:da14:5a:5a01:41a7:4134:a18b:f5f8]:42539/jars/xbean-asm9-shaded-4.20.jar with timestamp 1654283322058
22/06/03 19:08:42 INFO SparkContext: Added JAR file:/tmp/spark-cdc6b2aa-657b-4464-b7d8-3cbe2fea3872/xz-1.8.jar at spark://[2406:da14:5a:5a01:41a7:4134:a18b:f5f8]:42539/jars/xz-1.8.jar with timestamp 1654283322058
22/06/03 19:08:42 INFO SparkContext: Added JAR file:/tmp/spark-cdc6b2aa-657b-4464-b7d8-3cbe2fea3872/zookeeper-3.6.2.jar at spark://[2406:da14:5a:5a01:41a7:4134:a18b:f5f8]:42539/jars/zookeeper-3.6.2.jar with timestamp 1654283322058
22/06/03 19:08:42 INFO SparkContext: Added JAR file:/tmp/spark-cdc6b2aa-657b-4464-b7d8-3cbe2fea3872/zookeeper-jute-3.6.2.jar at spark://[2406:da14:5a:5a01:41a7:4134:a18b:f5f8]:42539/jars/zookeeper-jute-3.6.2.jar with timestamp 1654283322058
22/06/03 19:08:42 INFO SparkContext: Added JAR file:/tmp/spark-cdc6b2aa-657b-4464-b7d8-3cbe2fea3872/zstd-jni-1.5.0-4.jar at spark://[2406:da14:5a:5a01:41a7:4134:a18b:f5f8]:42539/jars/zstd-jni-1.5.0-4.jar with timestamp 1654283322058
22/06/03 19:08:42 INFO SparkContext: Added JAR s3://datalake-cbts-test/spark-jobs/app.jar at s3://datalake-cbts-test/spark-jobs/app.jar with timestamp 1654283322058
22/06/03 19:08:42 INFO Executor: Starting executor ID driver on host ip-10-0-148-61.ap-northeast-1.compute.internal
22/06/03 19:08:42 INFO Executor: Fetching spark://[2406:da14:5a:5a01:41a7:4134:a18b:f5f8]:42539/jars/protobuf-java-2.5.0.jar with timestamp 1654283322058
22/06/03 19:08:43 ERROR Utils: Aborting task
java.io.IOException: Failed to connect to /2406:da14:5a:5a01:41a7:4134:a18b:f5f8:42539
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:288)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:218)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:230)
    at org.apache.spark.rpc.netty.NettyRpcEnv.downloadClient(NettyRpcEnv.scala:399)
    at org.apache.spark.rpc.netty.NettyRpcEnv.$anonfun$openChannel$4(NettyRpcEnv.scala:367)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1508)
    at org.apache.spark.rpc.netty.NettyRpcEnv.openChannel(NettyRpcEnv.scala:366)
    at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:763)
    at org.apache.spark.util.Utils$.fetchFile(Utils.scala:550)
    at org.apache.spark.executor.Executor.$anonfun$updateDependencies$13(Executor.scala:962)
    at org.apache.spark.executor.Executor.$anonfun$updateDependencies$13$adapted(Executor.scala:954)
    at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:985)
    at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
    at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
    at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:984)
    at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:954)
    at org.apache.spark.executor.Executor.<init>(Executor.scala:247)
    at org.apache.spark.scheduler.local.LocalEndpoint.<init>(LocalSchedulerBackend.scala:64)
    at org.apache.spark.scheduler.local.LocalSchedulerBackend.start(LocalSchedulerBackend.scala:132)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:220)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:582)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2694)
    at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
    at np.com.ngopal.spark.SparkJob.getSession(SparkJob.java:74)
    at np.com.ngopal.spark.SparkJob.main(SparkJob.java:111)
```

Thanks
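Note: a commonly suggested mitigation, offered here as an assumption rather than a confirmed fix for this particular job, is to force the Spark driver and executors onto the IPv4 stack via JVM options passed in sparkSubmitParameters when starting the job run. The sketch below reuses the jar path from the logs above; the application ID and execution role ARN are placeholders.

```
aws emr-serverless start-job-run \
    --application-id <application-id> \
    --execution-role-arn <execution-role-arn> \
    --job-driver '{
        "sparkSubmit": {
            "entryPoint": "s3://datalake-cbts-test/spark-jobs/app.jar",
            "sparkSubmitParameters": "--conf spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true --conf spark.executor.extraJavaOptions=-Djava.net.preferIPv4Stack=true"
        }
    }'
```

If the driver still advertises an IPv6 address after this, the security group and subnet route tables for the chosen private subnet are worth rechecking, since dependency fetches go over the driver's advertised spark:// endpoint.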
1 answer · 0 votes · 139 views · asked 4 months ago