Questions tagged with Developer Tools
I deleted a space several days ago from within CodeCatalyst. I also made sure that it was deleted from the associated AWS account.
I've tried to create a new space using that same name, but CodeCatalyst tells me that the `Space name is already taken`.
Do I need to carry out some other action to make the name available again?
We have an existing Java 17 app running Spring Boot 2.7.9 and are upgrading to Spring Boot 3.0.3.
Spring Boot 3 uses Jakarta EE 10; as a result, `javax.servlet.Filter` is no longer available (it is now `jakarta.servlet.Filter`).
ref: https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-3.0-Migration-Guide#jakarta-ee
Given the above, we can no longer trace incoming requests - https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-java-filters.html
Are there alternatives available to resolve this (e.g. an updated AWS SDK, or alternative filter suggestions)?
-- updated --
I see in the GitHub repo that a Jakarta-compatible servlet filter has been merged into the master branch, so I assume this is just pending a new release.
ref: https://github.com/aws/aws-xray-sdk-java/pull/372
We have multiple AWS accounts and run the AWS Toolkit from VS Code. I was able to set up our SSO profile in the config file for a different account, and we can successfully log in through the terminal. However, the AWS Explorer window that lets you connect to an AWS profile only picks up the default profile. How do we add another profile for an SSO account in AWS Explorer?
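For reference, a named IAM Identity Center (SSO) profile in `~/.aws/config` looks like the sketch below; all values are placeholders for your own organization's settings. The toolkit can only offer profiles that are defined here by name, so a working named entry is the prerequisite:

```ini
# ~/.aws/config — placeholder values for an SSO profile
[profile my-sso-account]
sso_start_url = https://my-org.awsapps.com/start
sso_region = us-east-1
sso_account_id = 111122223333
sso_role_name = DeveloperAccess
region = us-east-1
```

With a profile like this in place, `aws sso login --profile my-sso-account` works from the terminal, and the toolkit's connection picker has a named (non-default) profile to select.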

Hello,
I've been messing around with AWS Application Migration Service and I noticed that when a Cutover/Test server is launched there is no key pair. I checked the launch template and there is no option to create a key pair. Any idea on this?
Thank you
Amanuel.

They create 267 errors and fail the build. The only workaround is excluding the tests altogether.
Hello,
Not sure this is the right place for feature requests...
I'm following the [IVS DVR](https://github.com/aws-samples/amazon-ivs-dvr-web-demo) sample, and it seems that fetching the live recording's location on S3 is way more complex than it should be.
The current solution sets up an S3 bucket listener to "catch" new `recording-started.json` files and writes a `recording-started-latest.json` file with the details to be used later.
Checking the IVS SDK, I see there's a nice [GetStream](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-ivs/classes/getstreamcommand.html) command that already returns the stream details - everything except the recording location. I'm sure there's a technical reason for that, but it would make the API much friendlier if the stream info included the recording path.
Thx.
I've been sitting here waiting to SSH into my new instance for ten minutes and the AWS CLI install is STILL running. Why is it so painfully slow? This is insane.
The instance was created from a launch template with this user-data script:
```
#!/bin/bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
```
Hello,
Is there a way to read a file from a specific branch in a CodeCommit repository using the aws-sdk for js/node?
I need to do this from a lambda function.
I saw there's a getFile method, but the docs lack examples. What I've done so far:
```
const client = new AWS.CodeCommit({ region: "us-east-1" });
const file = await client.getFile({
  filePath: "myFile.txt",
  repositoryName: "myRepo",
  commitSpecifier: "myBranch"
});
```
The documentation says that getFile returns the base64-encoded contents of a specified file and its metadata. But it also says the return type is an AWS.Request object, so how can I read it?
thanks
M
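For what it's worth, a sketch of one way to read the response, assuming the AWS SDK for JavaScript v2: `getFile()` returns an `AWS.Request`, and calling `.promise()` on it yields the response data, whose `fileContent` holds the file bytes as a `Buffer`:

```javascript
// Sketch: read a file from a branch with an aws-sdk v2 CodeCommit client.
// The client is passed in so the helper can be exercised without AWS access.
async function readFileFromBranch(codecommit, repositoryName, commitSpecifier, filePath) {
  // getFile() returns an AWS.Request; .promise() converts it into a Promise
  // that resolves to the response data described in the docs.
  const data = await codecommit
    .getFile({ repositoryName, commitSpecifier, filePath })
    .promise();
  // fileContent arrives as a Buffer of the raw file bytes; decode it for text files.
  return data.fileContent.toString("utf8");
}
```

From a Lambda this would be called as, e.g., `await readFileFromBranch(new AWS.CodeCommit({ region: "us-east-1" }), "myRepo", "myBranch", "myFile.txt")`.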
I am using the generated UI components that come standard with the Figma-UI library premade by Amplify. Specifically, a card called `ProductCard` has product information with an image attached. When configuring the component in the Amplify Studio, I want to attach a static photo as the source of the picture. What are some ways to go about this?
I have the SVG on my local machine, and normal React development would bundle the SVG into the code. Should I put the local development path in the src? Should I upload the SVG to the public bucket and point the src to that URL? Can I override the card's child props so I can set its src from my local development files?

```
<Image
  width="154px"
  height="63px"
  display="block"
  gap="unset"
  alignItems="unset"
  justifyContent="unset"
  shrink="0"
  position="relative"
  padding="0px 0px 0px 0px"
  objectFit="cover"
  src="https://tmtamplifyapp-storage-c3cc73b4102934-dev.s3.amazonaws.com/public/tmt-logo.png"
  onClick={() => {
    mptOneOnClick();
  }}
  {...getOverrideProps(overrides, "mpt 1")}
></Image>
```
In the generated code there is a section for override props. I want to keep the connection with the component's UI for future use.
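If the goal is to swap the `src` without editing the generated file, the generated components accept an `overrides` prop keyed by the child's name (here `"mpt 1"`, matching the `getOverrideProps` call in the snippet). A sketch, where the path is a hypothetical asset bundled by your build:

```javascript
// Sketch: override props for the Image child named "mpt 1" in the generated card.
// The path below is a hypothetical local asset; in practice a bundler import
// (e.g. import logo from "./tmt-logo.svg") would supply the final URL.
const productCardOverrides = {
  "mpt 1": {
    src: "/assets/tmt-logo.svg",
  },
};
```

You would then render `<ProductCard overrides={productCardOverrides} />`, leaving the generated file untouched so Amplify Studio can keep regenerating it.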
We are using the Java Flow framework for SWF workflows and activities. The current workflow executes two activities. We now need to register a new activity and update the workflow implementation to conditionally run it when the workflow input meets a certain condition. There is no change to the other two activities and no change to the workflow interface itself; only the workflow implementation is updated to invoke another activity. My question: if we deploy this change, will in-flight workflow executions started on the old version fail or time out because of the replay process? I am not sure whether this falls under https://docs.aws.amazon.com/amazonswf/latest/awsflowguide/java-flow-making-changes-solutions.html#use-feature-flags, such that in-flight executions won't be impacted when the changes are deployed. Please see my code before and after below.
```
// Before Change
@Workflow(dataConverter = ManualOperationSwfDataConverter.class)
@WorkflowRegistrationOptions(defaultExecutionStartToCloseTimeoutSeconds = MAX_WAIT_TIME_SECONDS,
        defaultTaskStartToCloseTimeoutSeconds = DEFAULT_TASK_START_TO_CLOSE_TIMEOUT_SECONDS)
public interface MyWorkflowDefinition {
    @Execute(version = "1.0")
    void MyWorkflow(Input input);
}

@Override
@Asynchronous
public void MyWorkflow(Input input) {
    new TryCatch() {
        @Override
        protected void doTry() {
            final Promise<Input> promise = client.runActivity1(input);
            final Promise<Void> result2 = client.runActivity2(promise);
        }

        @Override
        protected void doCatch(final Throwable e) throws Throwable {
            handleError(e);
            throw e;
        }
    };
}
```
```
// After Change
@Workflow(dataConverter = ManualOperationSwfDataConverter.class)
@WorkflowRegistrationOptions(defaultExecutionStartToCloseTimeoutSeconds = MAX_WAIT_TIME_SECONDS,
        defaultTaskStartToCloseTimeoutSeconds = DEFAULT_TASK_START_TO_CLOSE_TIMEOUT_SECONDS)
public interface MyWorkflowDefinition {
    @Execute(version = "1.0")
    void MyWorkflow(Input input);
}

@Override
@Asynchronous
public void MyWorkflow(Input input) {
    new TryCatch() {
        @Override
        protected void doTry() {
            if (input.client == eligibleClient) {
                final Promise<Input> promise1 = client.runActivity3(input);
                final Promise<Input> promise2 = client.runActivity1(promise1);
                final Promise<Void> result2 = client.runActivity2(promise2);
            } else {
                final Promise<Input> promise = client.runActivity1(input);
                final Promise<Void> result2 = client.runActivity2(promise);
            }
        }

        @Override
        protected void doCatch(final Throwable e) throws Throwable {
            handleError(e);
            throw e;
        }
    };
}
```
Hi,
I'm looking for a smart solution for offering my developers a shared environment and at the same time optimizing costs. Let me explain what we do today.
I have 10 developers each of them with a personal EC2 instance. They use Ubuntu and then NICE DCV to remotely connect to the desktop and work from there. Usually, they use the EC2 for developing our internal application (coding Java, Terraform, Python, HTML, lambda, go), experimenting, and spinning up containers and when done they push the code to our centralized Git repository (Gitlab).
Now, in this scenario, I have too many EC2 instances (we try to shut them down overnight and turn them on in the morning, but quite often we run into a lack of on-demand capacity and have to wait).
WHAT I WOULD LIKE TO ACHIEVE
Ideally, I believe I could still make use of EC2 instances, but by sharing one or more among multiple users; I don't see the need for a personal EC2 instance per developer. I could still leverage NICE DCV configured for multiple sessions, and my developers would share the same EC2 instance. I could then turn the EC2 instances off/on when not in use and, to be sure of capacity, benefit from on-demand capacity reservations, for example (I'm not really convinced reserved instances are the right approach for my case).
Do you have a smart idea for a better setup for the scenario described above?
Thank you ;)
I'm not sure if something changed on AWS, but I didn't change anything on my side on the lambda function or the db/tables in AWS Timestream.
I'm testing my lambda and it works, but it doesn't write to AWS Timestream. I'm very confused.
I can see in the logs that everything goes through and I'm not seeing any errors... and when I query my Timestream DB it doesn't return anything - unless I go way back to when it was working properly.
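One thing worth ruling out (an assumption, since the function's code isn't shown): Timestream rejects records whose timestamps fall outside the table's memory-store retention window with a `RejectedRecordsException`, and if the write's promise isn't awaited or its error isn't logged, the Lambda can appear to succeed while writing nothing. A sketch of a wrapper that surfaces this, assuming an aws-sdk v2 `TimestreamWrite` client passed in by the caller:

```javascript
// Sketch: await the Timestream write and log rejections instead of losing them.
// The client is passed in so the helper can be exercised without AWS access.
async function writeRecordsLoudly(tsWrite, params) {
  try {
    await tsWrite.writeRecords(params).promise();
    return true;
  } catch (err) {
    // In the v2 SDK the error's code carries the service exception name.
    if (err.code === "RejectedRecordsException") {
      console.error("Timestream rejected records:", err.message);
    } else {
      console.error("Timestream write failed:", err);
    }
    return false;
  }
}
```

Logging the failure (and checking the record timestamps against the table's retention settings) should make it clear whether the writes are silently being rejected.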