Questions tagged with DevOps
Hi team,
I followed this [blog](https://aws.amazon.com/blogs/security/extend-aws-iam-roles-to-workloads-outside-of-aws-with-iam-roles-anywhere/) to use an IAM role for a workload outside AWS. In my case, I want a pipeline running in Azure DevOps to push an image into Amazon ECR, for example.
Following the blog, I was able to generate credentials from the IAM role and call Amazon S3, but I'm not sure how this applies to a workload running in Azure. What are the steps to make a pipeline in Azure assume an IAM role in AWS and push images to ECR? I don't know how to apply the IAM Roles Anywhere principle in Azure.
Is there an AWS doc or blog explaining the steps?
Thank you!
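For what it's worth, here is a minimal sketch of how the Roles Anywhere flow from the blog could look inside an Azure Pipelines job. It assumes the agent already holds the client certificate and key issued per the blog, and uses the [rolesanywhere-credential-helper](https://github.com/aws/rolesanywhere-credential-helper); all ARNs, file paths, the account ID, and the image name are placeholders, not values from an official AWS guide for Azure DevOps:

```
steps:
  - script: |
      # Point the AWS CLI at the Roles Anywhere credential helper via a
      # credential_process profile, so every CLI call gets temporary credentials.
      mkdir -p ~/.aws
      cat >> ~/.aws/config <<'EOF'
      [profile ecr-push]
      credential_process = ./aws_signing_helper credential-process --certificate client-cert.pem --private-key client-key.pem --trust-anchor-arn arn:aws:rolesanywhere:us-east-1:111122223333:trust-anchor/EXAMPLE --profile-arn arn:aws:rolesanywhere:us-east-1:111122223333:profile/EXAMPLE --role-arn arn:aws:iam::111122223333:role/EcrPushRole
      EOF
      # Log in to ECR with the short-lived credentials and push the image.
      aws ecr get-login-password --region us-east-1 --profile ecr-push |
        docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com
      docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
    displayName: Push image to ECR via IAM Roles Anywhere
```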
When I scan my EC2 machine using AWS Systems Manager Patch Manager, I am getting this error:
```
[ERROR]: Error loading entrance module.
Traceback (most recent call last):
  File "/var/log/amazon/ssm/patch-baseline-operations/common_os_selector_methods.py", line 125, in _get_snapshot_info
    ssm_client = client_selector.get_default_client(instance_id, region, "ssm")
  File "/var/log/amazon/ssm/patch-baseline-operations/patch_common/client_selector.py", line 61, in get_default_client
```
I want to completely remove Python 3.7 from my Amazon Linux machine and reinstall Python 3.9 using yum commands.
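A rough sketch of what that could look like, assuming your enabled repositories actually ship a Python 3.9 package; the package names below are hypothetical and vary across Amazon Linux versions, so verify them first. Also note that yum itself depends on the system Python, so removing the default interpreter can break the package manager:

```
# Check which Python packages are installed and available before touching anything.
yum list installed | grep python3
yum list available | grep python3

# Hypothetical package names -- adjust to what the listings above show.
sudo yum remove -y python3.7*
sudo yum install -y python39
python3.9 --version
```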
I would like to know if it is possible to deploy artifacts to different folders on a Windows server based on the deployment group name.
We have a CodeDeploy application for EC2/on-premises deployments, and we would like to deploy the same artifacts to different folders in different deployment groups.
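One possible approach, sketched under the assumption that this is a CodeDeploy appspec for Windows (the script name and paths are placeholders): CodeDeploy sets the `DEPLOYMENT_GROUP_NAME` environment variable inside lifecycle hook scripts, so a hook can pick the destination folder per deployment group.

```
version: 0.0
os: windows
files:
  - source: /
    destination: C:\staging\my-app
hooks:
  AfterInstall:
    - location: scripts\copy-by-group.ps1
      timeout: 300
```

The hypothetical `copy-by-group.ps1` would then branch on the group name, e.g. `if ($env:DEPLOYMENT_GROUP_NAME -eq 'Staging') { Copy-Item C:\staging\my-app\* C:\apps\staging -Recurse -Force }`.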
I am making a CloudFormation template (main.yaml) plus a separate YAML file from which parameter values can be taken, so that I can reuse the template by only changing values in the parameters file, but I am facing an issue. Please guide accordingly.
```
Parameters:
  InstanceType:
    Description: EC2 instance type
    Type: String
    Default: t2.micro
    AllowedValues: [t2.micro, t2.small, t2.medium, m4.large]
  SecurityGroupId:
    Description: Security group ID for the EC2 instance
    Type: AWS::EC2::SecurityGroup::Id
  VpcId:
    Description: VPC ID
    Type: AWS::EC2::VPC::Id
  KeyName:
    Description: Name of the key pair to use for SSH access
    Type: AWS::EC2::KeyPair::KeyName
  SubnetId:
    Description: Subnet ID for the EC2 instance
    Type: AWS::EC2::Subnet::Id

Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      SecurityGroupIds: [!Ref SecurityGroupId]
      KeyName: !Ref KeyName
      SubnetId: !Ref SubnetId
      ImageId: ami-0f8ca728008ff5af4
      UserData: !Base64
        Fn::Sub: |
          #!/bin/bash
          sudo apt-get update
          sudo apt install apache2 -y
          sudo systemctl start apache2
          sudo systemctl enable apache2
```
Like we have variables in Terraform, I want the values in a params.yaml.
How do I format and define these values, or take user input? For example:
```
InstanceType: t2.micro
KeyName: devops
SecurityGroupId: sg-02464c840862fddaf
SubnetId: subnet-0b2bbe1a860c1ec8f
VpcId: vpc-01491099ac5c6857a
```
I am facing issues with formatting and defining this params.yaml file. Please guide.
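For reference, the AWS CLI's native parameter-file format is JSON rather than YAML. A minimal sketch using the values above (the parameter keys must match the names declared in main.yaml; the stack name below is a placeholder):

```
[
  { "ParameterKey": "InstanceType",    "ParameterValue": "t2.micro" },
  { "ParameterKey": "KeyName",         "ParameterValue": "devops" },
  { "ParameterKey": "SecurityGroupId", "ParameterValue": "sg-02464c840862fddaf" },
  { "ParameterKey": "SubnetId",        "ParameterValue": "subnet-0b2bbe1a860c1ec8f" },
  { "ParameterKey": "VpcId",           "ParameterValue": "vpc-01491099ac5c6857a" }
]
```

You would then deploy with `aws cloudformation create-stack --stack-name my-stack --template-body file://main.yaml --parameters file://params.json`.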
Hello,
I am working on deploying a Docker-packaged application onto Elastic Beanstalk, currently with a single EC2 instance.
I have a multi-stage Dockerfile that is as small as I could possibly make it. Initially, I tried to deploy the Dockerfile itself to Elastic Beanstalk, but the builds took too long and the deployment would fail.
So currently I build my image locally, push it to an AWS ECR repository, and then deploy to Elastic Beanstalk using a Dockerrun.aws.json file. This, however, still hits timeout errors on deployment. Looking at the logs, the deployment appears to be stopped because pulling my pre-built image takes too long. Is there any way to increase this timeout?
I have already tried running eb deploy with the --timeout flag, but it doesn't seem to change anything. I have also tried adding a config file to increase the timeout:
.ebextensions/increase-timeout.config
```
option_settings:
  - namespace: aws:elasticbeanstalk:command
    option_name: Timeout
    value: 1800
```
But that also fails to change the 300-second timeout.
Does anyone have an idea how I could fix this? Thanks!
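In case it helps to rule out the .ebextensions route, here is a minimal sketch of applying the same option directly to a running environment via the CLI, assuming an environment named my-env (a placeholder):

```
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings Namespace=aws:elasticbeanstalk:command,OptionName=Timeout,Value=1800
```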
We are trying to build and deploy a .NET application on the Windows platform in an Elastic Beanstalk environment, but the deployment fails with this error:
```
Deployment completed, but with errors: During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version. Failed to deploy application. Unsuccessful command execution on instance id(s) 'i-0031f3decb3972a8f'. Aborting the operation. [Instance: i-0031f3decb3972a8f ConfigSet: Infra-WriteRuntimeConfig, Infra-EmbeddedPreBuild, Hook-PreAppDeploy, Infra-EmbeddedPostBuild, Hook-EnactAppDeploy, Hook-PostAppDeploy] Command failed on instance. Return code: 1 Output: null. Deployment Failed: Unexpected Exception Error occurred during build: Command hooks failed
```
* The CloudFormation stack below is failing with this error: "Resource handler returned message: Error occurred during operation 'CreateApplication'. (RequestToken: <some-token-id>, HandlerErrorCode: GeneralServiceException)"
* Region: eu-west-1
* Does anyone know the possible reasons for this error?
```
AWSTemplateFormatVersion: 2010-09-09
Description: EMR serverless cluster
Resources:
  EmrSparkApp:
    Type: AWS::EMRServerless::Application
    Properties:
      Type: Spark
      ReleaseLabel: emr-6.9.0
Outputs:
  EmrSparkAppId:
    Description: Application ID of the EMR Serverless Spark App
    Value: !Ref EmrSparkApp
```
I want to read data from Databricks output and format it for SageMaker training.
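A minimal sketch of one common shape of this, assuming the Databricks job writes Parquet to S3; the bucket/prefix names and the "label" column are placeholders, and it requires pandas, pyarrow, and s3fs:

```
import pandas as pd

# Read the Parquet files the Databricks job produced.
df = pd.read_parquet("s3://my-databricks-output/run-2024-01/")

# Many SageMaker built-in algorithms expect CSV with the label in the first
# column and no header row; reorder columns accordingly before writing.
df = df[["label"] + [c for c in df.columns if c != "label"]]
df.to_csv("s3://my-sagemaker-bucket/train/train.csv", index=False, header=False)
```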
I'm trying to create a nested stack in a CloudFormation template. I have declared, in the parent application, a reference to the HTTP API we are using, and I use this API in the child template. When I try to build with SAM, it throws this error:
**"E0001 Error transforming template: ApiId must be a valid reference to an 'AWS::Serverless::HttpApi' resource in same template."**
**Parent template declaration:**
```
childStack:
  Type: "AWS::Serverless::Application"
  Properties:
    Location: ./child.yaml
    Parameters:
      ApiId: !Ref ApiReference
```
**Child template declaration:**
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
  ApiId:
    Type: String
Globals:
  Function:
    Runtime: !Ref "AWS::NoValue"
    Handler: !Ref "AWS::NoValue"
    Layers: !Ref "AWS::NoValue"
Resources:
  lambdaFunctionLogGroup:
    Type: 'AWS::Logs::LogGroup'
    Properties:
      LogGroupName: !Join
        - '/'
        - - '/aws/lambda'
          - !Ref lambdaFunction
      RetentionInDays:
        !FindInMap [Service, !Ref EnvironmentName, LogRetentionInDays]
  lambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Description: Image validation for identity verification
      FunctionName: !Sub '${EnvironmentName}-lambda'
      PackageType: Image
      Architectures: ['arm64']
      Environment:
        Variables:
          ExampleVariable: example-value
      Policies:
        - CloudWatchLambdaInsightsExecutionRolePolicy
      Events:
        Name:
          Type: HttpApi
          Properties:
            Path: /event-client/api/lambda
            Method: POST
            ApiId: !Ref ApiId
            Auth:
              Authorizer: OAuth2Authorizer
      VpcConfig:
        SubnetIds: !Split
          - ','
          - Fn::ImportValue: !Sub '${EnvironmentName}-PrivateSubnets'
      Tags:
        Environment: !Sub '${EnvironmentName}'
    Metadata:
      DockerTag: nodejs16.x-v1
      DockerContext: ../dist/src/client/lambda-route
      Dockerfile: Dockerfile
```
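In case a workaround sketch is useful: SAM requires an HttpApi event's ApiId to point at a resource defined in the same template, so one possible alternative, assuming ApiId stays a plain String parameter (the resource names below are placeholders), is to wire the route with raw ApiGatewayV2 resources instead of a SAM event:

```
lambdaIntegration:
  Type: AWS::ApiGatewayV2::Integration
  Properties:
    ApiId: !Ref ApiId
    IntegrationType: AWS_PROXY
    IntegrationUri: !GetAtt lambdaFunction.Arn
    PayloadFormatVersion: '2.0'
lambdaRoute:
  Type: AWS::ApiGatewayV2::Route
  Properties:
    ApiId: !Ref ApiId
    RouteKey: POST /event-client/api/lambda
    Target: !Sub integrations/${lambdaIntegration}
```

You would also need an AWS::Lambda::Permission granting the API permission to invoke the function, since SAM only adds that automatically for its own event sources.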
Hi team,
my org relies on Azure DevOps Pipelines, and we want to deploy from Azure to our ECS Fargate cluster, but we have some constraints:
- we cannot create long-lived credentials in AWS
- we don't have outbound internet connectivity from within our AWS VPC

How can we deploy the built artifact from Azure to ECS without using long-lived AWS credentials?
I saw the solution of using EC2 [build agents](https://medium.com/hashmapinc/automate-code-deployment-with-aws-ec2-build-agents-for-your-azure-devops-pipelines-6636fe1c8e21).
Can Azure assume a role in AWS without using build agents? And how can Azure assume a role in AWS when assuming a role itself still seems to need AWS credentials?
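One pattern worth sketching, with heavy caveats: AWS can federate with an external OIDC identity provider, and `sts assume-role-with-web-identity` exchanges an OIDC token for short-lived credentials without any stored AWS secrets. The sketch below assumes you have created an IAM OIDC identity provider for your Azure AD tenant, a role MyEcsDeployRole that trusts it, and an earlier step that puts an Azure AD token into AZURE_AD_TOKEN; all names, the cluster/service, and the account ID are placeholders.

```
- script: |
    # Exchange the pipeline's OIDC token for temporary AWS credentials.
    read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$(aws sts assume-role-with-web-identity \
      --role-arn arn:aws:iam::111122223333:role/MyEcsDeployRole \
      --role-session-name azdo-deploy \
      --web-identity-token "$AZURE_AD_TOKEN" \
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
      --output text)"
    export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
    # Trigger the ECS deployment with the short-lived credentials.
    aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment
  displayName: Deploy to ECS Fargate with short-lived credentials
```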
I am trying to cut down the cost of Container Insights, so I want to delete some metrics that I am not using. Please let me know if there is any way to delete the default metrics.