Questions tagged with DevOps
We have an application that requires clients to hold long-lasting socket connections, and we have application-side code in place to gracefully handle a SIGTERM. It depends on connections remaining established, but my observation is that the SIGTERM arrives only after the de-registration delay has elapsed, by which point all active connections have already been killed.
Is there a mechanism by which the SIGTERM can be sent before connection draining starts, or perhaps some other signal that would tell us the application instance will soon be terminated?
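For reference, one mechanism that may fit here (assuming the instances are terminated by an Auto Scaling group rather than directly) is to add a termination lifecycle hook and poll the instance metadata "target lifecycle state", so the application learns about the upcoming termination independently of, and usually before, the SIGTERM. A minimal Python sketch; the polling interval and the draining logic are placeholders:

```python
import time
import urllib.request

IMDS = "http://169.254.169.254/latest"

def imds_token() -> str:
    # IMDSv2: fetch a short-lived session token first.
    req = urllib.request.Request(
        IMDS + "/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
    )
    return urllib.request.urlopen(req, timeout=2).read().decode()

def target_lifecycle_state() -> str:
    req = urllib.request.Request(
        IMDS + "/meta-data/autoscaling/target-lifecycle-state",
        headers={"X-aws-ec2-metadata-token": imds_token()},
    )
    return urllib.request.urlopen(req, timeout=2).read().decode()

def wait_for_termination_notice(poll_seconds: int = 5) -> None:
    # "Terminated" is the documented target state once the ASG decides to
    # terminate the instance; with a termination lifecycle hook in place the
    # instance then waits in Terminating:Wait, giving the app time to drain.
    while target_lifecycle_state() != "Terminated":
        time.sleep(poll_seconds)

# wait_for_termination_notice()
# ... drain long-lived sockets here, then complete the lifecycle action.
```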
I'm receiving this error in Lambda: **Unable to import module 'functions': No module named 'functions' Traceback**
**code: Code.fromAsset(path.resolve(__dirname, BundleLocation + name)),**
I'm deploying this via the CDK. I confirmed I have this in my CDK code for the Lambda, yet I am still receiving the error. When I go into Lambda to troubleshoot why my app is not working, I look through the CloudWatch logs and find the error above.
I also want to note that my CDK stack is in TypeScript and my Lambda code is in Python. I'm not sure if this makes a difference. Does anyone know what could cause this error, given that I have the correct syntax?
Also, are there any workarounds for this?
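For context on the usual cause: this error generally means `functions.py` is not at the root of the deployed zip, either because `Code.fromAsset` points at the wrong directory or because the asset was never bundled with its dependencies. A common workaround is to let the CDK bundle the asset inside a Docker container. A sketch of that pattern, written here in Python CDK for consistency with the other examples on this page (the TypeScript `BundlingOptions` API is equivalent); the paths and names are placeholders:

```python
from aws_cdk import BundlingOptions, aws_lambda as lambda_

# Inside a Stack; "lambda/src" is a hypothetical directory holding
# functions.py and a requirements.txt.
fn = lambda_.Function(
    self, "MyFunction",
    runtime=lambda_.Runtime.PYTHON_3_9,
    handler="functions.handler",  # <file>.<function> at the zip root
    code=lambda_.Code.from_asset(
        "lambda/src",
        bundling=BundlingOptions(
            image=lambda_.Runtime.PYTHON_3_9.bundling_image,
            command=[
                "bash", "-c",
                "pip install -r requirements.txt -t /asset-output"
                " && cp -au . /asset-output",
            ],
        ),
    ),
)
```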

I am getting 'Model Building Failed' in AWS Canvas after three tries. I created a model, set my parameters, ran validation, and then chose Standard build. The analysis pops up with an expected time of 45 minutes; at 1 hour and 7 minutes, while the model is generating the explainability report, I get 'Model Building Failed'. I have deleted the app, deleted the model, and run this three separate times, and I get the exact same result at the same elapsed time with the same error. On the Models screen it shows a model with a score and a lock, IN BUILDING. When I click it, the page refreshes, and a few minutes later it still shows the same.
Is it possible to remove any kind of autostart feature on CodePipeline? I have two actions in the source stage, one from CodeCommit and one from S3, and both automatically generate two different CloudWatch rules that trigger my pipeline. I also need to remove the autostart at resource creation. I'm currently using Terraform to build the pipeline, but I didn't find anything related in the documentation. Thanks for the help!
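For reference while comparing approaches: in Terraform, polling can be turned off per source action with the `PollForSourceChanges` configuration key, and any console-generated CloudWatch Events rules can be found and disabled by target. A hedged boto3 sketch of the latter (the pipeline ARN is a placeholder, and pagination is omitted):

```python
import boto3

events = boto3.client("events")
pipeline_arn = "arn:aws:codepipeline:eu-west-1:123456789012:my-pipeline"  # placeholder

# Find every CloudWatch Events / EventBridge rule targeting the pipeline
# and disable it so nothing auto-starts executions.
for name in events.list_rule_names_by_target(TargetArn=pipeline_arn)["RuleNames"]:
    events.disable_rule(Name=name)
    print(f"disabled rule: {name}")
```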
I have a Django app which uses Celery beat to scan the DB and trigger tasks accordingly. I want to deploy this to Elastic Beanstalk, but simply applying leader_only to my Celery beat invocation won't be enough, as we need a way to ensure the beat instance is not killed during autoscaling events.
So far I've found the following options online:
1. Run a separate EC2 instance that runs Celery beat. Not ideal, but I could make this a cheap instance since the functionality required is so simple and lightweight. I assume that if I point this at an SQS queue and have my workers pulling from that queue, everything will work fine. However, it's not clear to me how to have this instance discover the tasks from my Django app, short of deploying the app again to the second instance and having that beat instance interact with my queue.
2. Use some sort of leader-selection Lambda, as described here (https://ajbrown.org/2017/02/10/leader-election-with-aws-auto-scaling-groups.html), for my EB autoscaling group. This seems a bit extra complicated; I'm guessing the idea is to have a script in my container commands that checks whether the instance is the leader (as assigned by the leader tag in the above tutorial) and only executes Celery beat if so.
3. Ditch SQS and use an ElastiCache Redis instance as my broker, then install the RedBeat scheduler (https://github.com/sibson/redbeat) to prevent multiple instances of a beat service from running (a configuration sketch follows below). I assume this wouldn't affect the tasks it spawns, though, correct? My beat tasks spawn several tasks of the same 'type' with different arguments (I'd appreciate a sanity check on this if possible).
My question is: can anyone help me assess the pros and cons of these options in terms of cost and functionality? Is there a better, more seamless way to ensure that Celery beat runs on exactly one instance while my Celery workers scale with my autoscaling infrastructure? I'm an AWS newbie, so I would greatly appreciate any help!
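For option 3, a minimal RedBeat configuration sketch (the Redis endpoint and task name are placeholders). As far as I understand, RedBeat only locks the scheduler itself, so the tasks beat enqueues remain ordinary Celery tasks consumed by however many workers are running:

```python
from celery import Celery

app = Celery("myproject")
app.conf.update(
    broker_url="redis://my-elasticache-host:6379/0",        # placeholder endpoint
    redbeat_redis_url="redis://my-elasticache-host:6379/1",
    redbeat_lock_timeout=300,  # Redis lock stops a second beat from scheduling
)

app.conf.beat_schedule = {
    "scan-db-every-minute": {
        "task": "myproject.tasks.scan_db",  # hypothetical task
        "schedule": 60.0,
    },
}

# Start with: celery -A myproject beat -S redbeat.RedBeatScheduler
```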
I want to create a CodePipeline with three stages: the source and build stages should be in region A, and the deploy stage should be in a different region B. How do I write a CloudFormation template or Terraform configuration to achieve this?
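For reference, one way this is commonly expressed (sketched here with the CDK in Python, which synthesizes to CloudFormation; raw CloudFormation uses an `ArtifactStores` map keyed by region) is to give the cross-region action an explicit `region` and let the pipeline maintain an artifact bucket in each region. The names below, and the `build_output` artifact from an earlier build stage, are placeholders:

```python
from aws_cdk import aws_codepipeline as codepipeline
from aws_cdk import aws_codepipeline_actions as cpactions

# Inside a Stack deployed to region A; source and build stages omitted.
pipeline = codepipeline.Pipeline(self, "CrossRegionPipeline")

pipeline.add_stage(
    stage_name="Deploy",
    actions=[
        cpactions.CloudFormationCreateUpdateStackAction(
            action_name="DeployToRegionB",
            stack_name="my-app-stack",
            template_path=build_output.at_path("template.yaml"),
            admin_permissions=True,
            region="us-west-2",  # region B; the CDK provisions a replication bucket there
        )
    ],
)
```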
I have an SSM Automation document with five steps.
Right now it works perfectly: once I run it, I can click on each executed step ID and check its output.
But is there any way to accumulate the output of all five steps and send everything in an email once SSM has finished running?
Or, alternatively, to collect the outputs from all five step IDs, put them into a sixth step, and send them as an email as part of the SSM document itself?
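For the second variant, one approach (assuming an SNS topic with an email subscription already exists) is a final `aws:executeScript` step whose `InputPayload` passes in the earlier steps' outputs, e.g. `{"step1": "{{ Step1.Output }}", ...}`. A sketch of the Python handler such a step could run; the payload keys and topic ARN are placeholders:

```python
import boto3

def handler(events, context):
    # "events" is the step's InputPayload: the earlier steps' outputs
    # plus the SNS topic to publish to.
    topic_arn = events.pop("topic_arn")
    body = "\n\n".join(
        f"--- {step} ---\n{output}" for step, output in events.items()
    )
    boto3.client("sns").publish(
        TopicArn=topic_arn,
        Subject="SSM Automation results",
        Message=body,
    )
    return {"published": True}
```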
I'm searching for a good way to automate migrating a DAG between multiple instances (staging/production) as part of a DevOps workflow. I would like to be able to run my DAGs in my staging environment with different configuration parameters (S3 bucket paths, etc.) and run the same DAG in my production environment without requiring a change to the DAG code (automate the migration).
Here is what I'm considering:
1. Set an environment variable in the Airflow/MWAA instance as part of the initial setup (e.g. env=staging, env=prod)
2. Create a JSON configuration file with staging and production configuration parameters and store it with the DAGs
3. Create a DAG that is a prerequisite for any DAG requiring configuration; it checks the Airflow environment variable and sets variables to the staging/prod configuration parameters
4. Use templated variables in DAGs requiring configuration
Is there a better way to approach this? Any advice is appreciated!
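For what it's worth, a sketch of how steps 1-4 could look inside a DAG file, assuming an `ENVIRONMENT` variable is set on the instance (on MWAA this might instead come from a configuration override or a secret) and a `config.json` ships alongside the DAGs; both names are placeholders:

```python
import json
import os
from pathlib import Path

# config.json sits next to the DAG files, e.g.:
# {"staging": {"s3_bucket": "my-staging-bucket"},
#  "prod":    {"s3_bucket": "my-prod-bucket"}}
ENV = os.environ.get("ENVIRONMENT", "staging")
CONFIG = json.loads((Path(__file__).parent / "config.json").read_text())[ENV]

# Tasks can reference CONFIG["s3_bucket"] directly, or push the dict
# into Airflow Variables so templated fields can resolve it.
print(f"Running as {ENV}, using bucket {CONFIG['s3_bucket']}")
```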
Hello,
I receive this error in Lambda: **Unable to import module 'functions': No module named 'functions' Traceback**
I have researched the issue, and from what I understand it's a problem with bundling some of the Python dependencies and libraries. The issue I'm having is finding a fix for this via the CDK: we deploy our resources with the CDK, and I would like to add the fix to the CDK stack.
How do I implement a deployment package with my Lambda in CDK? Are there resources I can find for these steps?
Thanks
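For anyone landing here: one commonly suggested route (assuming the experimental `aws-lambda-python-alpha` module is acceptable) is the `PythonFunction` construct, which pip-installs a `requirements.txt` into the asset during synthesis. Sketched in Python CDK; the directory and handler names are placeholders, and the TypeScript equivalent lives in `@aws-cdk/aws-lambda-python-alpha`:

```python
from aws_cdk import aws_lambda as lambda_
from aws_cdk.aws_lambda_python_alpha import PythonFunction

# "lambda/src" is a hypothetical directory containing functions.py
# and a requirements.txt that the construct bundles automatically.
fn = PythonFunction(
    self, "MyFunction",
    entry="lambda/src",
    index="functions.py",  # file containing the handler
    handler="handler",     # function name inside functions.py
    runtime=lambda_.Runtime.PYTHON_3_9,
)
```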
Based on the docs here, https://sagemaker.readthedocs.io/en/stable/experiments/sagemaker.experiments.html, I can use the SageMaker API or the experiments package to start a trial/experiment and record its runs, and I can choose what I want to track for each run: parameters, metrics, and so on. I assume there are many different types of things to track here, such as artifacts, metrics, logs, and scripts, and that all of this can be viewed within SageMaker. Artifacts live in some S3 bucket, I guess, but when it comes to the scripts we use for training, evaluation, or even preprocessing, how does it keep track of those? I assume the default is simply that some S3 location saves all of the artifacts related to each run. Can the scripts live in a code repository such as AWS CodeCommit, with all the scripts associated with a run linked to a CodeCommit URL? Or can other Git repositories, like GitHub or GitLab, be linked in the same way?
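As far as I can tell, the experiments SDK has no automatic Git integration, but a run accepts arbitrary metadata, so one workaround is to log the commit and repository URL yourself. A hedged sketch using the documented `Run` methods; the experiment name, repo URL, and script path are placeholders:

```python
import subprocess
from sagemaker.experiments.run import Run

# Capture the current commit so the run is traceable back to source.
commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()

with Run(experiment_name="my-experiment", run_name="training-run-1") as run:
    run.log_parameter("git_commit", commit)
    # Store a link to the script's home in CodeCommit/GitHub/GitLab;
    # SageMaker records the string, it does not fetch the repo.
    run.log_artifact(
        name="training-script",
        value=f"https://github.com/my-org/my-repo/blob/{commit}/train.py",
        media_type="text/plain",
        is_output=False,
    )
    # ... training code, run.log_metric(...), etc.
```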
Hi,
I've uploaded a new version of my application by zipping the source code (the application is an SSR React application). After a while, I could no longer retrieve the environment or the application from the Elastic Beanstalk page.
The error I found in the browser console is `Could not retrieve "platforms." Error: AWS Query failed to deserialize response`. I've tried different browsers and re-logging in, and it is still the same.
My application is still running on the web (not the version that I uploaded).
Does anyone know what could be wrong? or how I can debug this, please?
Note: I haven't set up the EB CLI on my local machine due to certain constraints.
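Since the EB CLI is unavailable, one way to confirm that the environment itself is healthy (and that the failure is confined to the console's "platforms" request) is to query the API directly. A minimal boto3 sketch; the region is a placeholder:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")  # placeholder region

# If this succeeds, the application/environment records are intact and
# the problem is limited to the console.
for env in eb.describe_environments()["Environments"]:
    print(env["EnvironmentName"], env["Status"], env["Health"], env["VersionLabel"])
```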
How can I find out whether an app was created with GitHub, Bitbucket, or GitLab?
I'm looking for support in transferring the hosting of an app.