Containers

AWS container services offer a broad choice of ways to run your containers, on global infrastructure spanning 77 Availability Zones across 24 Regions. AWS also provides strong security isolation between your containers, ensures you are running the latest security updates, and gives you the ability to set granular access permissions for every container.

Recent questions


Lambda function as image, how to find your handler URI

Hello, I have followed all of the tutorials on how to build an AWS Lambda function as a container image, and I am also using AWS SAM. What I don't understand is how to figure out the endpoint URL mapping from within my image to the Lambda function.

For example, my Docker image uses the AWS Python 3.9 base image, installs some other packages and my Python requirements, and defines my handler as:

```
summarizer_function_lambda.postHandler
```

The Python file copied into the image has the same name as above, without the `.postHandler`.

My AWS SAM template has:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: AWS Lambda dist-bart-summarizer function
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3
Resources:
  DistBartSum:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      FunctionName: DistBartSum
      ImageUri: <my-image-url>
      PackageType: Image
      Events:
        SummarizerFunction:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /postHandler
            Method: POST
```

So what is my actual URI path for the POST call, either locally or once deployed on Lambda? When I try a curl command I get `{"message": "Internal server error"}`:

```
curl -XPOST "https://<my-aws-uri>/Prod/postHandler/" -d '{"content": "Test data.\r\n"}'
```

So I guess my question is: how do you "map" your handler definitions from within a container all the way to the endpoint URI?
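The handler string `summarizer_function_lambda.postHandler` tells Lambda to import `summarizer_function_lambda.py` and invoke its `postHandler` function; with a `Type: Api` event, SAM fronts the function with API Gateway using Lambda proxy integration, which expects the handler to return a dict with `statusCode` and a string `body`. A minimal sketch of such a handler (the summarization logic is a placeholder, not the poster's actual code):

```python
# summarizer_function_lambda.py -- minimal proxy-style handler sketch.
# With a Type: Api event, API Gateway uses proxy integration, so the
# handler must return {"statusCode": ..., "body": "<string>"}.
import json

def postHandler(event, context):
    # The POST body arrives as a string on the event.
    payload = json.loads(event.get("body") or "{}")
    summary = payload.get("content", "")  # placeholder for the real summarizer
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"summary": summary}),
    }
```

Under this template the deployed route would be `https://<api-id>.execute-api.<region>.amazonaws.com/Prod/postHandler`, and `sam local start-api` serves the same path at `http://127.0.0.1:3000/postHandler`. An `Internal server error` from API Gateway usually means the handler raised an exception or returned something other than the proxy-format dict; the function's CloudWatch logs would confirm which.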
0 answers · 0 votes · 10 views · asked 8 hours ago

Django App in ECS Container Cannot Connect to S3 in Gov Cloud

I have a container running on an EC2 instance in ECS. The container hosts a Django-based application that uses S3 and RDS for its file storage and database needs, respectively. I have configured my VPC, subnets, VPC endpoints, internet gateway, roles, security groups, and other parameters such that I am able to host the site, connect to the RDS instance, and even access the site. The issue is the connection to S3. When I run `python manage.py collectstatic --no-input`, which should upload/update any new/modified files to S3 as part of application setup, the program hangs and will not continue. No files are transferred to the already set up S3 bucket.

**Details of the setup:**

All of the below is hosted on AWS GovCloud.

**VPC and Subnets**

* 1 VPC located in GovCloud East with 2 Availability Zones (AZs) and one private and one public subnet in each AZ (4 subnets total)
* The 3 default routing tables (1 for each private subnet, and 1 for the two public subnets together)
* DNS hostnames and DNS resolution are both enabled

**VPC Endpoints**

All endpoints have the "vpce-sg" security group attached and are associated with the above VPC:

* s3 gateway endpoint (set up to use the two private subnet routing tables)
* ecr-api interface endpoint
* ecr-dkr interface endpoint
* ecs-agent interface endpoint
* ecs interface endpoint
* ecs-telemetry interface endpoint
* logs interface endpoint
* rds interface endpoint

**Security Groups**

* Elastic Load Balancer security group (elb-sg)
  * Used for the Elastic Load Balancer
  * Only allows inbound traffic from my local IP
  * No outbound restrictions
* ECS security group (ecs-sg)
  * Used for the EC2 instance in ECS
  * Allows all traffic from the elb-sg
  * Allows http:80, https:443 from vpce-sg for S3
  * Allows postgresql:5432 from vpce-sg for RDS
  * No outbound restrictions
* VPC endpoints security group (vpce-sg)
  * Used for all VPC endpoints
  * Allows http:80, https:443 from ecs-sg for S3
  * Allows postgresql:5432 from ecs-sg for RDS
  * No outbound restrictions

**Elastic Load Balancer**

* Set up to use an Amazon Certificate (HTTPS) with a domain managed by GoDaddy, since GovCloud Route 53 does not allow public hosted zones
* Listener on HTTP permanently redirects to HTTPS

**Roles**

* ecsInstanceRole (used for the EC2 instance in ECS)
  * Attached policies: AmazonS3FullAccess, AmazonEC2ContainerServiceforEC2Role, AmazonRDSFullAccess
  * Trust relationships: ec2.amazonaws.com
* ecsTaskExecutionRole (used for executionRole in the task definition)
  * Attached policies: AmazonECSTaskExecutionRolePolicy
  * Trust relationships: ec2.amazonaws.com, ecs-tasks.amazonaws.com
* ecsRunTaskRole (used for taskRole in the task definition)
  * Attached policies: AmazonS3FullAccess, CloudWatchLogsFullAccess, AmazonRDSFullAccess
  * Trust relationships: ec2.amazonaws.com, ecs-tasks.amazonaws.com

**S3 Bucket**

* Standard bucket set up in the same GovCloud region as everything else

**Troubleshooting**

If I bypass the connection to S3, the application launches successfully and I can connect to the website, but since static files are supposed to be hosted on S3, there is less formatting and images are missing.

Using a bastion instance I was able to SSH into the EC2 instance running the container and successfully test my connection to S3 from there using `aws s3 ls s3://BUCKET_NAME`.

If I connect to a shell within the application container itself and try to connect to the bucket using...

```
s3 = boto3.resource('s3')
bucket = s3.Bucket(BUCKET_NAME)
s3.meta.client.head_bucket(Bucket=bucket.name)
```

...I receive a timeout error:

```
File "/.venv/lib/python3.9/site-packages/urllib3/connection.py", line 179, in _new_conn
    raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<botocore.awsrequest.AWSHTTPSConnection object at 0x7f3da4467190>, 'Connection to BUCKET_NAME.s3.amazonaws.com timed out. (connect timeout=60)')
...
File "/.venv/lib/python3.9/site-packages/botocore/httpsession.py", line 418, in send
    raise ConnectTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://BUCKET_NAME.s3.amazonaws.com/"
```

Based on [this article](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html#vpc-endpoints-policies-s3), I think this may have something to do with the fact that I am using the GoDaddy DNS servers, which may be preventing proper URL resolution for S3:

> If you're using the Amazon DNS servers, you must enable both DNS hostnames and DNS resolution for your VPC. If you're using your own DNS server, ensure that requests to Amazon S3 resolve correctly to the IP addresses maintained by AWS.

I am unsure how to ensure that requests to Amazon S3 resolve correctly to the IP addresses maintained by AWS. Perhaps I need to set up another private DNS on Route 53? I have tried a very similar setup for this application in AWS non-GovCloud using Route 53 public DNS instead of GoDaddy, and there is no issue connecting to S3.

Please let me know if there is any other information I can provide to help.
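One reading of the traceback, offered as a sketch rather than a confirmed fix: the SDK is targeting `BUCKET_NAME.s3.amazonaws.com`, the commercial-partition global endpoint, while an S3 gateway endpoint only adds routes for the S3 prefix list of its own region, so a request to an out-of-region endpoint times out in a private subnet. If no region is configured inside the container, boto3 falls back to that global endpoint. A minimal sketch, assuming the bucket lives in `us-gov-east-1` (GovCloud East) and keeping the placeholder `BUCKET_NAME` from the question:

```python
# Sketch: pin boto3 to the GovCloud regional S3 endpoint so traffic
# matches the routes installed by the S3 gateway VPC endpoint.
import boto3

s3 = boto3.resource(
    "s3",
    region_name="us-gov-east-1",
    # Being explicit about the endpoint removes any resolution ambiguity:
    endpoint_url="https://s3.us-gov-east-1.amazonaws.com",
)

bucket = s3.Bucket("BUCKET_NAME")  # placeholder name from the question
s3.meta.client.head_bucket(Bucket=bucket.name)  # should now route via the gateway endpoint
```

Setting `AWS_DEFAULT_REGION=us-gov-east-1` in the task definition, or the `AWS_S3_REGION_NAME`/`AWS_S3_ENDPOINT_URL` settings if the app uses django-storages, would accomplish the same thing without code changes.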
1 answer · 0 votes · 21 views · asked 12 hours ago

Unable to override taskRoleArn when running ECS task from Lambda

I have a Lambda function that is supposed to pass its own permissions to the code running in an ECS task. It looks like this:

```
ecs_parameters = {
    "cluster": ...,
    "launchType": "FARGATE",
    "networkConfiguration": ...,
    "overrides": {
        "taskRoleArn": boto3.client("sts").get_caller_identity().get("Arn"),
        ...
    },
    "platformVersion": "LATEST",
    "taskDefinition": f"my-task-definition-{STAGE}",
}

response = ecs.run_task(**ecs_parameters)
```

When I run this in Lambda, I get this error:

```
"errorMessage": "An error occurred (ClientException) when calling the RunTask operation: ECS was unable to assume the role 'arn:aws:sts::787364832896:assumed-role/my-lambda-role...' that was provided for this task. Please verify that the role being passed has the proper trust relationship and permissions and that your IAM user has permissions to pass this role."
```

If I change the task definition in ECS to use `my-lambda-role` as the task role, it works. It's specifically when I try to override the task role from Lambda that it breaks.

The Lambda role has the `AWSLambdaBasicExecutionRole` policy and also an inline policy that grants it `ecs:runTask` and `iam:PassRole`. It has a trust relationship that looks like:

```
"Effect": "Allow",
"Principal": {
    "Service": [
        "ecs.amazonaws.com",
        "lambda.amazonaws.com",
        "ecs-tasks.amazonaws.com"
    ]
},
"Action": "sts:AssumeRole"
```

The task definition has a policy that grants it `sts:AssumeRole` and `iam:PassRole`, and a trust relationship that looks like:

```
"Effect": "Allow",
"Principal": {
    "Service": "ecs-tasks.amazonaws.com",
    "AWS": "arn:aws:iam::account-ID:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS"
},
"Action": "sts:AssumeRole"
```

How do I allow the Lambda function to pass the role to ECS, and ECS to assume the role it's been given?

P.S. I know a lot of these permissions are overkill, so let me know if there are any I can get rid of :) Thanks!
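A hedged observation: the ARN in the error begins with `arn:aws:sts::…:assumed-role/…`, which is what `get_caller_identity()` returns from inside Lambda, but the `taskRoleArn` override expects the underlying IAM role ARN (`arn:aws:iam::…:role/…`), and that role must trust `ecs-tasks.amazonaws.com`. A minimal sketch of deriving the role ARN from the caller identity (cluster and task definition names are placeholders, and the role is assumed to have no IAM path):

```python
# Sketch: convert the STS assumed-role ARN
# (arn:aws:sts::ACCOUNT:assumed-role/ROLE_NAME/SESSION) into the IAM role
# ARN that RunTask's taskRoleArn override expects. The caller also needs
# iam:PassRole on that role, and the role must trust ecs-tasks.amazonaws.com.
import boto3

identity = boto3.client("sts").get_caller_identity()
account_id = identity["Account"]
role_name = identity["Arn"].split("/")[1]  # middle segment is the role name
task_role_arn = f"arn:aws:iam::{account_id}:role/{role_name}"

response = boto3.client("ecs").run_task(
    cluster="my-cluster",                     # placeholder
    launchType="FARGATE",
    taskDefinition="my-task-definition",      # placeholder
    # networkConfiguration omitted for brevity; required for Fargate/awsvpc.
    overrides={"taskRoleArn": task_role_arn},
)
```

On trimming permissions: the passed role's trust policy likely only needs the `ecs-tasks.amazonaws.com` principal, and the Lambda role's trust policy only `lambda.amazonaws.com`; it is the `iam:PassRole` grant on the Lambda role, not a trust relationship, that authorizes handing the role to ECS.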
2 answers · 1 vote · 13 views · asked 7 days ago
