Java Development

Download the tools needed to run Java applications on AWS: SDK for Java, AWS IDE Toolkits, AWS CDK for Java, and Amazon Corretto.

Recent questions

  • My Lambda function takes a long time for the first operation with an AWS client. For example, when I query an index through the DynamoDB client, the first execution takes 2 seconds, while subsequent executions in the same Lambda environment complete in 100 milliseconds. The DynamoDB client is initialized outside of the Lambda handler method. Why does my first execution take so long?
    1
    answers
    0
    votes
    21
    views
    asked 3 days ago
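The cold-start cost described above is typical: everything done outside the handler (client construction, credential resolution, TLS and connection setup, class loading) runs once per execution environment, and the first request also pays for lazily established connections. A pure-Java sketch of the pattern (class and field names are illustrative, not AWS APIs):

```java
// Simulates the "initialize once per container, reuse across invocations"
// pattern. The static field is the analogue of a DynamoDbClient built
// outside the handler: its cost is paid exactly once per environment.
public class Handler {
    static int initCount = 0;

    // Expensive one-time setup, paid at class-load time (the cold start).
    static final Object CLIENT = createClient();

    static Object createClient() {
        initCount++;          // count how many times setup actually runs
        return new Object();  // stands in for an SDK client
    }

    public String handle(String input) {
        // Subsequent invocations reuse CLIENT and its warm state.
        return "handled:" + input;
    }
}
```

A common mitigation for the remaining first-request latency is to issue a cheap "priming" call (e.g. a lightweight query) during initialization so the connection is established before the first real invocation.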
  • I need to use the **LookupEvents** API provided by CloudTrail to periodically fetch events. Initially I set the **StartTime** parameter and fetched all events. Then, on each later fetch, I set **StartTime** to the most recent event's **EventTime** from the previous fetch plus 1 second. Can I do this, or will I miss events? If so, can I get suggestions that also avoid duplicate events?
    1
    answers
    0
    votes
    6
    views
    asked 3 days ago
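Setting the next **StartTime** to the last **EventTime** + 1 second can drop events that share the same second as the last one returned. A safer pattern is to re-query from the last **EventTime** itself (an overlapping window) and filter duplicates by **EventId**. A pure-Java sketch of that bookkeeping, with `Event` standing in for CloudTrail's event type (no AWS calls):

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Overlap-and-dedupe strategy: advance the query window only to the last
// EventTime (not +1s), and suppress already-seen EventIds.
public class EventPoller {
    public record Event(String id, Instant time) {}

    private final Set<String> seenIds = new HashSet<>();
    private Instant nextStartTime = Instant.EPOCH;

    // Accepts a page of events returned for the window starting at
    // nextStartTime; returns only events not processed before.
    public List<Event> accept(List<Event> page) {
        List<Event> fresh = new ArrayList<>();
        Instant maxTime = nextStartTime;
        for (Event e : page) {
            if (seenIds.add(e.id())) fresh.add(e);  // dedupe by EventId
            if (e.time().isAfter(maxTime)) maxTime = e.time();
        }
        // Overlap: next query starts at the last EventTime itself, so
        // same-second events are never skipped; duplicates are filtered above.
        nextStartTime = maxTime;
        return fresh;
    }
}
```

In a real poller, the seen-ID set only needs to retain IDs whose timestamps fall inside the overlap window, so it stays small.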
  • Java 17 is not supported by AWS Lambda. I need to create a Lambda function using Spring Cloud with a Java 17 base image. What dependencies must I install in my Java 17 base image?
    1
    answers
    0
    votes
    26
    views
    asked 5 days ago
  • Hello, I'm using the Java CDK to create a new ECR repository. Here is a code fragment:
    ```
    Repository.Builder.create(scope, id)
        .imageScanOnPush(true)
        .repositoryName("my-registry")
        .removalPolicy(RemovalPolicy.DESTROY)
        .build();
    ```
    Looking in the AWS Console, the name of the newly created repository is "null/my-registry" instead of "my-registry". If I create the ECR repository in the AWS Console, its name comes out as expected, i.e. "my-registry" and not "null/my-registry". What am I doing wrong here? Many thanks in advance. Nicolas
    3
    answers
    0
    votes
    27
    views
    Nicolas
    asked 15 days ago
  • Hi team, I am using the RevokeToken API to revoke a refresh token, and it does revoke it: I can see that I am no longer able to generate a new access token with that refresh token. But when I call the RevokeToken API again with the same refresh token, it doesn't throw any error. I expected it to throw an error such as "refresh token has already been revoked". Here is how I am revoking the token:
    ```
    RevokeTokenRequest revokeTokenRequest = new RevokeTokenRequest();
    revokeTokenRequest.setClientId("client-id");
    revokeTokenRequest.setToken("refresh_token");
    revokeTokenRequest.setClientSecret("client-secret");
    awsCognitoIdentityProvider.revokeToken(revokeTokenRequest);
    ```
    1
    answers
    0
    votes
    29
    views
    asked a month ago
  • ![Enter image description here](/media/postImages/original/IMsCMJcrx6R7-itzskH949uA) Hi Team, I deployed a Spring Boot application using Elastic Beanstalk and completed the necessary steps:
    * setting the server port
    * adding roles
    But the application did not deploy; the environment says "Degraded". Could someone please help? Source code: https://github.com/andrewsselvaraj/springawt/tree/main/spring-boot-jwt
    0
    answers
    0
    votes
    18
    views
    asked a month ago
  • Team, I'm getting a 502 Bad Gateway from Spring Boot on Elastic Beanstalk. Can someone please help?
    0
    answers
    0
    votes
    17
    views
    asked a month ago
  • We are using the Java Flow framework for SWF workflows and activities. The current workflow executes two activities. We now need to register a new activity and update the workflow implementation to conditionally run it when the workflow input meets a certain condition. There is no change to the other two activities and no change to the workflow interface itself; only the workflow implementation is updated to invoke another activity. My question: if we deploy this change, will in-flight workflow executions that started on the old version fail or time out because of the replay process? I am not sure whether this falls under https://docs.aws.amazon.com/amazonswf/latest/awsflowguide/java-flow-making-changes-solutions.html#use-feature-flags, so that in-flight executions won't be impacted when we deploy the changes to the workflow. Please see my code before and after:
    ```
    // Before change
    @Workflow(dataConverter = ManualOperationSwfDataConverter.class)
    @WorkflowRegistrationOptions(
        defaultExecutionStartToCloseTimeoutSeconds = MAX_WAIT_TIME_SECONDS,
        defaultTaskStartToCloseTimeoutSeconds = DEFAULT_TASK_START_TO_CLOSE_TIMEOUT_SECONDS)
    public interface MyWorkflowDefinition {
        @Execute(version = "1.0")
        void MyWorkflow(Input input);
    }

    @Override
    @Asynchronous
    public void MyWorkflow(Input input) {
        new TryCatch() {
            @Override
            protected void doTry() {
                final Promise<Input> promise = client.runActivity1(input);
                final Promise<Void> result2 = client.runActivity2(promise);
            }

            @Override
            protected void doCatch(final Throwable e) throws Throwable {
                handleError(e);
                throw e;
            }
        };
    }
    ```
    ```
    // After change
    @Workflow(dataConverter = ManualOperationSwfDataConverter.class)
    @WorkflowRegistrationOptions(
        defaultExecutionStartToCloseTimeoutSeconds = MAX_WAIT_TIME_SECONDS,
        defaultTaskStartToCloseTimeoutSeconds = DEFAULT_TASK_START_TO_CLOSE_TIMEOUT_SECONDS)
    public interface MyWorkflowDefinition {
        @Execute(version = "1.0")
        void MyWorkflow(Input input);
    }

    @Override
    @Asynchronous
    public void MyWorkflow(Input input) {
        new TryCatch() {
            @Override
            protected void doTry() {
                if (input.client == eligibleClient) {
                    final Promise<Input> promise1 = client.runActivity3(input);
                    final Promise<Input> promise2 = client.runActivity1(promise1);
                    final Promise<Void> result2 = client.runActivity2(promise2);
                } else {
                    final Promise<Input> promise = client.runActivity1(input);
                    final Promise<Void> result2 = client.runActivity2(promise);
                }
            }

            @Override
            protected void doCatch(final Throwable e) throws Throwable {
                handleError(e);
                throw e;
            }
        };
    }
    ```
    0
    answers
    0
    votes
    27
    views
    asked a month ago
  • This is cross-posted on StackOverflow: https://stackoverflow.com/questions/75389388/using-aws-java-sdk-2-0-webidentitytokenfilecredentialsprovider-gives-sdkclientex
    I have an application that already works using Kinesis. The application uses AWS Session Credentials, but we are switching to using either AWS Session Credentials or a Web Identity Token (software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider) depending on the deployment environment. When I add the code to use WebIdentityTokenFileCredentialsProvider, I get the stack trace below. I can't provide the code, but rest assured I'm setting an HTTP client for Kinesis. The stack trace shows that a default HTTP client is being configured via the provider deep within the AWS SDK code. I have no influence over the credentials provider setting the HTTP client, as WebIdentityTokenFileCredentialsProvider doesn't give me a way to tell it that I don't need a default HTTP client being set. I know one option is to create my own implementation of WebIdentityTokenFileCredentialsProvider, but I'd rather not do that. Question: what else can I do to work around this?
    ```
    Caused by: software.amazon.awssdk.core.exception.SdkClientException: Multiple HTTP implementations were found on the classpath. To avoid non-deterministic loading implementations, please explicitly provide an HTTP client via the client builders, set the software.amazon.awssdk.http.service.impl system property with the FQCN of the HTTP service to use as the default, or remove all but one HTTP implementation from the classpath
        at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:102)
        at software.amazon.awssdk.core.internal.http.loader.ClasspathSdkHttpServiceProvider.loadService(ClasspathSdkHttpServiceProvider.java:62)
        at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
        at java.base/java.util.Spliterators$ArraySpliterator.tryAdvance(Spliterators.java:1002)
        at java.base/java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:129)
        at java.base/java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:527)
        at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:513)
        at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
        at java.base/java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:150)
        at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
        at java.base/java.util.stream.ReferencePipeline.findFirst(ReferencePipeline.java:647)
        at software.amazon.awssdk.core.internal.http.loader.SdkHttpServiceProviderChain.loadService(SdkHttpServiceProviderChain.java:44)
        at software.amazon.awssdk.core.internal.http.loader.CachingSdkHttpServiceProvider.loadService(CachingSdkHttpServiceProvider.java:46)
        at software.amazon.awssdk.core.internal.http.loader.DefaultSdkHttpClientBuilder.buildWithDefaults(DefaultSdkHttpClientBuilder.java:40)
        at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.lambda$resolveSyncHttpClient$7(SdkDefaultClientBuilder.java:343)
        at java.base/java.util.Optional.orElseGet(Optional.java:364)
        at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.resolveSyncHttpClient(SdkDefaultClientBuilder.java:343)
        at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.finalizeSyncConfiguration(SdkDefaultClientBuilder.java:282)
        at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.syncClientConfiguration(SdkDefaultClientBuilder.java:178)
        at software.amazon.awssdk.services.sts.DefaultStsClientBuilder.buildClient(DefaultStsClientBuilder.java:27)
        at software.amazon.awssdk.services.sts.DefaultStsClientBuilder.buildClient(DefaultStsClientBuilder.java:22)
        at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.build(SdkDefaultClientBuilder.java:145)
        at software.amazon.awssdk.services.sts.internal.StsWebIdentityCredentialsProviderFactory$StsWebIdentityCredentialsProvider.<init>(StsWebIdentityCredentialsProviderFactory.java:71)
        at software.amazon.awssdk.services.sts.internal.StsWebIdentityCredentialsProviderFactory$StsWebIdentityCredentialsProvider.<init>(StsWebIdentityCredentialsProviderFactory.java:55)
        at software.amazon.awssdk.services.sts.internal.StsWebIdentityCredentialsProviderFactory.create(StsWebIdentityCredentialsProviderFactory.java:47)
        at software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider.<init>(WebIdentityTokenFileCredentialsProvider.java:86)
        at software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider.<init>(WebIdentityTokenFileCredentialsProvider.java:46)
        at software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider$BuilderImpl.build(WebIdentityTokenFileCredentialsProvider.java:200)
    ```
    2
    answers
    0
    votes
    68
    views
    cwa
    asked 2 months ago
  • I'm trying to migrate from SDK for Java v1 to v2. With the `AmazonS3` client in SDK v1, `putObject` returns a `PutObjectResult`, which makes the object metadata available. From the object metadata I can get the version ID of the file (along with the content length and last-modified timestamp). I don't see a way to get any of this information from the `S3Client` in SDK v2 in the response to `putObject`. I do not want to make a separate call to get the version ID. I'm really hoping someone can tell me that this data is still available in the response and that I'm just not seeing it. Any pointers would be appreciated. Thanks. Edited to add that I don't want to make a second request; I'm looking for behavior similar to v1.
    1
    answers
    0
    votes
    18
    views
    Mark
    asked 2 months ago
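For what it's worth, in SDK for Java v2 the `PutObjectResponse` does carry the new object's version ID and ETag directly, so no second request should be needed. A minimal sketch (bucket, key, and file arguments are placeholders):

```java
import java.nio.file.Path;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectResponse;

public class PutObjectVersionExample {
    // Uploads a file and returns the version ID from the PutObject response
    // itself; versionId() is null when bucket versioning is not enabled.
    static String putAndGetVersionId(S3Client s3, String bucket, String key, Path file) {
        PutObjectResponse resp = s3.putObject(
                PutObjectRequest.builder().bucket(bucket).key(key).build(),
                RequestBody.fromFile(file));
        return resp.versionId();  // the ETag is available via resp.eTag()
    }
}
```

Content length and last-modified are not echoed back by PutObject in v2; those would still require a HeadObject call.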
  • I have an AWS Elastic Beanstalk load-balanced t2.medium environment set up recently. I noticed a CPU spike event (lasting a few seconds), followed by high latency that stays at 80 s and hangs; it never recovers on its own until a manual reboot (see chart). ![health matrix](/media/postImages/original/IMhXZ-n213SoG4YIy9n-Axvg) The new environment is a clone of our old instance (Tomcat 8.5 with Java 8 running on 64-bit Amazon Linux/3.4.18). The CPU spike might be caused by a batch job, but why does latency stay at 80 s after CPU usage recovers? This happened twice in 2 weeks. I checked the logs; there were no other suspicious events. The old instance (a t2.small running the same code) never behaved like this and never had latency anywhere near this high. Can anyone give some hints?
    0
    answers
    0
    votes
    10
    views
    asked 2 months ago
  • I have a service that is used to deploy a new EC2 instance behind an ELB. The code works fine most of the time, but every once in a while I get the error "Target groups 'arn:aws:elasticloadbalancing:ca-central-arn:...' not found (Service: AmazonElasticLoadBalancing; Status Code: 400; Error Code: TargetGroupNotFound)" when trying to register targets in the target group. Here is a code snippet:
    ```
    AmazonElasticLoadBalancing client = AmazonElasticLoadBalancingClient.builder()
        ....
        .build();
    ...
    CreateTargetGroupRequest createTargetGroupRequest = new CreateTargetGroupRequest();
    ...
    CreateTargetGroupResult targetGroupResult = client.createTargetGroup(createTargetGroupRequest);
    TargetGroup targetGroup = targetGroupResult.getTargetGroups().stream().findFirst().orElse(null);
    assert targetGroup != null;

    RegisterTargetsRequest registerTargetsRequest = new RegisterTargetsRequest();
    registerTargetsRequest.setTargetGroupArn(targetGroup.getTargetGroupArn());
    ...
    client.registerTargets(registerTargetsRequest);
    ```
    When I get the error and check the target groups in the AWS Console, I can see the group is there, but without any registered targets. Is this some obscure timing issue? Should I put a delay between creating the target group and registering the targets? Would it be a good idea to retry the operation when it throws the TargetGroupNotFound exception? Thanks for any suggestions.
    1
    answers
    0
    votes
    30
    views
    asked 2 months ago
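The symptom above is consistent with eventual consistency: the target group exists, but the RegisterTargets call occasionally races ahead of propagation. Retrying with backoff when TargetGroupNotFound is thrown is a common workaround. A generic, hedged sketch (the message-based matching is illustrative; real code would catch the SDK's TargetGroupNotFoundException type):

```java
import java.util.concurrent.Callable;
import java.util.function.Predicate;

// Generic retry-with-exponential-backoff helper for eventually consistent
// APIs: retries only exceptions the caller deems retryable, up to a cap.
public class Retry {
    public static <T> T withBackoff(Callable<T> call,
                                    Predicate<Exception> retryable,
                                    int maxAttempts,
                                    long initialDelayMs) throws Exception {
        long delay = initialDelayMs;
        for (int attempt = 1; ; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts || !retryable.test(e)) throw e;
                Thread.sleep(delay);  // wait before the next attempt
                delay *= 2;           // exponential backoff
            }
        }
    }
}
```

Usage would wrap `client.registerTargets(request)` in the `Callable` and treat only TargetGroupNotFound as retryable, so genuine errors still surface immediately.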
  • Hi everyone, my client is asking about the best way to customize the DynamoDB table used and created by Kinesis Data Streams. The main goal is to reduce the cost of this implementation, but I can't find any information on this topic, so:
    - Is it possible to customize the DynamoDB table used by Kinesis Data Streams to reduce costs?
    - Is it really necessary to use DynamoDB along with Kinesis Data Streams?
    1
    answers
    0
    votes
    64
    views
    asked 2 months ago
  • Hi all, my project repository is set up like this:
    ```
    .
    ./lambda
    ./infrastructure
    ```
    I am using CDK with Java, and I have a Maven build both in the "lambda" sub-folder (which contains my Lambda code) and in the "infrastructure" folder (which contains my CDK code). In my CodeCatalyst workflow, I was planning to execute "mvn package" (which builds the Lambda jar file) and save this artifact. Then this artifact needs to be used in the "cdk deploy" action. But as the "cdk deploy" action also needs the "source" input, I would need two inputs, and CodeCatalyst denies that with an error message: "Action InfrastructureDeployment declares 2 input artifacts which is more than the maximum count (1)". Does anyone have an idea how to resolve that? Thanks! Johannes
    ```
    BaseInfrastructureDeployment:
      Identifier: aws/cdk-deploy@v1
      Configuration:
        CdkRootPath: infrastructure
        Region: eu-central-1
        StackName: BaseStack
      Compute:
        Type: Lambda
      Environment:
        Connections:
          - Role: CodeCatalystPreviewDevelopmentAdministrator-z4s5g1
            Name: "916032256060"
        Name: alpha
      DependsOn:
        - Build
      Inputs:
        Variables:
          - Name: CODEBUILD_SRC_DIR_LambdaBuildOutput
            Value: /artifacts/BaseInfrastructureDeployment/lambda_build/lambda/target/lambda-1.0.0-jar-with-dependencies.jar
          - Name: xxx
            Value: test
        Artifacts:
          - lambda_build
        Sources:
          - WorkflowSource
    ```
    4
    answers
    2
    votes
    51
    views
    asked 3 months ago
  • We're using the `GlueSchemaRegistryDeserializerDataParser` class from https://github.com/awslabs/aws-glue-schema-registry. This seems to be built on v1 of the AWS SDK (or am I wrong?). Is there a replacement in aws-sdk-java-v2 (https://github.com/aws/aws-sdk-java-v2)?
    0
    answers
    0
    votes
    31
    views
    Jules
    asked 3 months ago
  • Hello, I am somewhat new to CDK, or at least I haven't come across this problem. **Problem:** I created a stack in CDK and accidentally gave the file the wrong name. I also unintentionally used a module that I was not supposed to use for my particular use case. I now need to refactor the code in the stack to create the resources without that module, and I also need to rename the file. For example, I created an S3 bucket without adding an event notification. Now I want to add an event notification to the existing S3 bucket in my new refactored code. **Goal:** The solution I'm looking for is to update the existing stack with the new code and the same resources, with a few different parameters, without redeploying the resources as separate entities. Is there a way to make configuration changes to my existing S3 bucket by just updating the stack, despite the file name change?
    1
    answers
    0
    votes
    725
    views
    asked 3 months ago
  • We recently migrated from a self-managed Kafka instance to a fully managed AWS MSK cluster. We have only IAM role-based authentication enabled for connecting to the MSK cluster from local systems. When I telnet to the public URL of the cluster I get a successful response, but when I try to start my Java application, it fails with the error below. Here is my Kafka configuration. Error:
    ````
    Invalid login module control flag 'com.amazonaws.auth.AWSStaticCredentialsProvider' in JAAS config
    ````
    ````
    @Configuration
    public class KafkaConfiguration {

        @Value("${aws.kafka.bootstrap-servers}")
        private String bootstrapServers;

        @Value("${aws.kafka.accessKey}")
        private String accessKey;

        @Value("${aws.kafka.secret}")
        private String secret;

        @Bean
        public KafkaAdmin kafkaAdmin() {
            AWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secret);
            Map<String, Object> configs = new HashMap<>();
            configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            configs.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
            configs.put(SaslConfigs.SASL_MECHANISM, "AWS_MSK_IAM");
            configs.put(SaslConfigs.SASL_JAAS_CONFIG,
                "com.amazonaws.auth.AWSCredentialsProvider com.amazonaws.auth.AWSStaticCredentialsProvider(" + awsCredentials + ")");
            return new KafkaAdmin(configs);
        }

        @Bean
        public ProducerFactory<String, String> producerFactory() {
            AWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secret);
            Map<String, Object> configProps = new HashMap<>();
            configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            configProps.put("security.protocol", "SASL_SSL");
            configProps.put(SaslConfigs.SASL_MECHANISM, "AWS_MSK_IAM");
            configProps.put(SaslConfigs.SASL_JAAS_CONFIG,
                "com.amazonaws.auth.AWSCredentialsProvider com.amazonaws.auth.AWSStaticCredentialsProvider(" + awsCredentials + ")");
            return new DefaultKafkaProducerFactory<>(configProps);
        }

        @Bean
        public KafkaTemplate<String, String> kafkaTemplate() {
            return new KafkaTemplate<>(producerFactory());
        }
    }
    ````
    **Consumer configuration:**
    ````
    @EnableKafka
    @Configuration
    public class KafkaConsumerConfig {

        @Value("${aws.kafka.bootstrap-servers}")
        private String bootstrapServers;

        @Value("${aws.kafka.accessKey}")
        private String accessKey;

        @Value("${aws.kafka.secret}")
        private String secret;

        public ConsumerFactory<String, String> consumerFactory() {
            AWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secret);
            Map<String, Object> configProps = new HashMap<>();
            configProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            configProps.put("security.protocol", "SASL_SSL");
            configProps.put(SaslConfigs.SASL_MECHANISM, "AWS_MSK_IAM");
            configProps.put(SaslConfigs.SASL_JAAS_CONFIG,
                "com.amazonaws.auth.AWSCredentialsProvider com.amazonaws.auth.AWSStaticCredentialsProvider(" + awsCredentials + ")");
            configProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            configProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            configProps.put(ConsumerConfig.GROUP_ID_CONFIG, "iTopLight");
            return new DefaultKafkaConsumerFactory<>(configProps);
        }

        @Bean
        public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> rawKafkaListenerContainerFactory() {
            ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(consumerFactory());
            return factory;
        }
    }
    ````
    0
    answers
    0
    votes
    47
    views
    asked 3 months ago
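The error above comes from putting a credentials-provider class name where a JAAS login module is expected. Assuming the `aws-msk-iam-auth` library (software.amazon.msk:aws-msk-iam-auth) is on the classpath, its documented client settings for IAM auth look like this sketch; credentials are then resolved through the default AWS credentials chain rather than being embedded in the JAAS line:

```
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
```

In the Spring config above, these would go into the same `configs`/`configProps` maps in place of the `AWSStaticCredentialsProvider` JAAS value.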
