
Compute

Whether you are building enterprise, cloud-native, or mobile apps, or running massive data clusters, AWS Compute services support virtually any workload. Use them to develop, deploy, run, and scale your applications.

Recent questions


Using Elastic Beanstalk - Docker Platform with ECR - Specifying a tag via environment variable

Hi, I am trying to develop a CI/CD process using Beanstalk's Docker platform with ECR. CodePipeline performs the builds and manages ECR tags and promotions; Terraform manages the infrastructure. I am looking for an approach that lets us use the same Dockerfile/Dockerrun.aws.json in production and non-production environments, despite wanting different tags of the same image deployed, perhaps from different repositories (repo_name_PROD vs repo_name_DEV). Producing and moving Beanstalk bundles that differ only in a tag feels unnecessary, and dynamically changing Dockerfiles during the deployment process also seems fragile.

What I was exploring was a simple environment variable: change which tag (commit hash) of an image is used based on a Beanstalk environment variable:

```
FROM 00000000000.dkr.ecr.us-east-1.amazonaws.com/repoName:${TAG}
ADD entrypoint.sh /
EXPOSE 8080 8787 9990
ENTRYPOINT [ "/entrypoint.sh" ]
```

where TAG is the Git hash of the code repository from which the artifact was produced; CodeBuild has built the code and tagged the Docker image. I understand that Docker supports this:

```
ARG TAG
FROM 00000000000.dkr.ecr.us-east-1.amazonaws.com/repo_name:${TAG}
ADD entrypoint.sh /
EXPOSE 8080 8787 9990
ENTRYPOINT [ "/entrypoint.sh" ]
```

but it requires building the image like this: `docker build --build-arg TAG=SOME_TAG .`

Am I correct in assuming this will not work with the Docker platform? I do not believe the EB Docker platform exposes a way to specify the build-arg. What is standard practice for managing tagged Docker images in Beanstalk? I am a little leery of the `latest` tag, as a poorly timed auto scaling event could pull an update before it should be deployed; that just does not work in my case. Updating my Dockerfile during deployment (via `sed`) seems like asking for trouble.
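One possible workaround, offered as a sketch rather than established Beanstalk practice: the Docker platform can pull a prebuilt image named in Dockerrun.aws.json instead of building from a Dockerfile, so the pipeline could render that file with the tag resolved at build time and keep a single template for every environment. The REPO and TAG variables below are hypothetical pipeline exports, not Beanstalk settings:

```
# Hypothetical CodeBuild step: render Dockerrun.aws.json with the image
# tag (e.g. the commit hash) resolved at build time, so the same
# template serves prod and non-prod bundles.
import json
import os

def render_dockerrun(repo, tag, path="Dockerrun.aws.json"):
    """Write a single-container Dockerrun.aws.json pointing at repo:tag."""
    dockerrun = {
        "AWSEBDockerrunVersion": "1",
        "Image": {"Name": "{0}:{1}".format(repo, tag), "Update": "true"},
        "Ports": [{"ContainerPort": 8080}],
    }
    with open(path, "w") as f:
        json.dump(dockerrun, f, indent=2)

if __name__ == "__main__":
    # REPO and TAG are assumed to be exported by the pipeline, e.g.
    # REPO=00000000000.dkr.ecr.us-east-1.amazonaws.com/repo_name_DEV
    # TAG=<commit hash>
    render_dockerrun(os.environ["REPO"], os.environ["TAG"])
```

Because the image is prebuilt and pinned to an immutable tag, this also avoids the auto-scaling race with `latest` described above.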
0 answers · 0 votes · 1 view · bchandley · asked a day ago

Annoying HLS Playback Problem On Windows But Not iOS

Hello All, I am getting up to speed with CloudFront and S3 for VOD. I used the CloudFormation template, uploaded an MP4, obtained the key for the m3u8 file, created a CloudFront distribution, and embedded it in my webpage. For the most part, it works great, but there is a significantly long buffering event during the first few seconds. This problem does not exist when I play the video on my iOS device. Strangely, it also does not happen when I play it in Akamai's HLS tester on my Windows 11 PC using Chrome. The problem seems to occur only when I play it from my website, using any browser, on my Windows 11 PC.

Steps I take to provoke the issue: open an Incognito tab in Chrome and navigate to my website; my player is set to autoplay, so it autoplays. The video starts out a bit fuzzy, stops for a second, restarts with great resolution, and stays that way until the end of the video. If I play it again, there are no problems at all, but that is to be expected; I assume there is a local cache.

Steps I have tried, and clues: I have tried different segment lengths by modifying the Lambda function created when the stack was formed by the template. The default was 5. At that setting, the fuzzy period lasted the longest but the buffering event seemed slightly shorter. At 1 and 2, the fuzzy period is far shorter but the buffering event is notably longer. One thought: could this be related to the video player I am using? I wanted to use the AWS IVS player but could not get it working on the first go, so I tried amazon-ivs-videojs. That worked immediately, except for the buffering issue, and the buffering issue seems to go away when I test the distribution via Akamai's HLS tester.

As always, much appreciation for reading this question and any time spent pondering it.
0 answers · 0 votes · 4 views · Redbone · asked 2 days ago

Unsupported Action in Policy for S3 Glacier/Veeam

Hello, I'm new to AWS S3 Glacier and I ran across an issue. I am working with Veeam to add S3 Glacier to my backup, and I have the bucket created. I need to add the following to my bucket policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:GetObject",
                "s3:RestoreObject",
                "s3:ListBucket",
                "s3:AbortMultipartUpload",
                "s3:GetBucketVersioning",
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation",
                "s3:GetBucketObjectLockConfiguration",
                "ec2:DescribeInstances",
                "ec2:CreateKeyPair",
                "ec2:DescribeKeyPairs",
                "ec2:RunInstances",
                "ec2:DeleteKeyPair",
                "ec2:DescribeVpcAttribute",
                "ec2:CreateTags",
                "ec2:DescribeSubnets",
                "ec2:TerminateInstances",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeImages",
                "ec2:DescribeVpcs",
                "ec2:CreateVpc",
                "ec2:CreateSubnet",
                "ec2:DescribeAvailabilityZones",
                "ec2:CreateRoute",
                "ec2:CreateInternetGateway",
                "ec2:AttachInternetGateway",
                "ec2:ModifyVpcAttribute",
                "ec2:CreateSecurityGroup",
                "ec2:DeleteSecurityGroup",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:DescribeRouteTables",
                "ec2:DescribeInstanceTypes"
            ],
            "Resource": "*"
        }
    ]
}
```

Once I put this in, the first error I get is "Missing Principal", so I added "Principal": {} under the Sid. I had no idea what to put in the brackets; I changed it to "*" and that seemed to fix it, though I am not sure that is the right thing to do. The next error is that all of the ec2: actions and s3:ListAllMyBuckets give an "Unsupported Action in Policy" error. This is where I get lost, and I am not sure what else to do. Do I need to open my bucket to the public? Is this a permissions issue? Do I have to recreate the bucket and disable Object Lock? Please help.
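A note on the likely cause, hedged since it cannot be verified from here: a bucket policy only accepts S3 actions that apply to the bucket or its objects, so account-level actions such as s3:ListAllMyBuckets and all of the ec2: actions are rejected as unsupported. They belong in an identity-based IAM policy attached to the user or role that Veeam authenticates as. A minimal boto3 sketch, where the user name veeam-backup is hypothetical:

```
# Sketch: put the account-level and EC2 actions in an identity-based
# IAM policy instead of the bucket policy. "veeam-backup" is a
# hypothetical user name for illustration.
import json

import boto3

iam = boto3.client("iam")

identity_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Trimmed to a few actions; include the full ec2:/s3: list here.
            "Action": [
                "s3:ListAllMyBuckets",
                "ec2:DescribeInstances",
                "ec2:RunInstances",
            ],
            "Resource": "*",
        }
    ],
}

iam.put_user_policy(
    UserName="veeam-backup",
    PolicyName="VeeamGlacierHelperAccess",
    PolicyDocument=json.dumps(identity_policy),
)
```

The S3 object and bucket actions can live in the same identity policy as well, which sidesteps the Principal question entirely; opening the bucket to the public or disabling Object Lock should not be necessary.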
2 answers · 0 votes · 5 views · amatuerAWSguy · asked 2 days ago

Lambda Function Execution Issue For RDS Reboot

Greetings, I created a simple function using as a reference the basic Python Lambda to start/stop RDS from here: https://aws.amazon.com/es/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-lambda/

But I changed it for reboot purposes, so my Python code is the following:

```
# Lambda for RDS reboot given a REGION, KEY and VALUE
import os

import boto3

# REGION: the RDS region
# KEY - VALUE: the KEY and VALUE of the RDS tag
def reboot_rds():
    region = os.environ["REGION"]
    key = os.environ["KEY"]
    value = os.environ["VALUE"]
    client = boto3.client("rds", region_name=region)
    response = client.describe_db_instances()

    # Collect the identifiers of all read replicas
    v_readReplica = []
    for i in response["DBInstances"]:
        v_readReplica.extend(i["ReadReplicaDBInstanceIdentifiers"])

    for i in response["DBInstances"]:
        # Skip Aurora instances
        if i["Engine"] not in ["aurora-mysql", "aurora-postgresql"]:
            # Skip replica instances and instances that have replicas
            if (
                i["DBInstanceIdentifier"] not in v_readReplica
                and len(i["ReadReplicaDBInstanceIdentifiers"]) == 0
            ):
                arn = i["DBInstanceArn"]
                resp2 = client.list_tags_for_resource(ResourceName=arn)
                # Check the tags
                if len(resp2["TagList"]) == 0:
                    print("Instance {0} tag value is not correct".format(i["DBInstanceIdentifier"]))
                else:
                    for tag in resp2["TagList"]:
                        # If the tag values match, reboot when possible
                        if tag["Key"] == key and tag["Value"] == value:
                            if i["DBInstanceStatus"] == "available":
                                client.reboot_db_instance(
                                    DBInstanceIdentifier=i["DBInstanceIdentifier"],
                                    ForceFailover=False,
                                )
                                print("Rebooting RDS {0}".format(i["DBInstanceIdentifier"]))
                            elif i["DBInstanceStatus"] == "rebooting":
                                print("Instance RDS {0} is already rebooting".format(i["DBInstanceIdentifier"]))
                            elif i["DBInstanceStatus"] == "creating":
                                print("Instance RDS {0} is on creation, try later".format(i["DBInstanceIdentifier"]))
                            elif i["DBInstanceStatus"] == "modifying":
                                print("Instance RDS {0} is modifying, try later".format(i["DBInstanceIdentifier"]))
                            elif i["DBInstanceStatus"] == "stopped":
                                print("Cannot reboot RDS {0}, it is already stopped".format(i["DBInstanceIdentifier"]))
                            elif i["DBInstanceStatus"] == "starting":
                                print("Instance RDS {0} is starting, try later".format(i["DBInstanceIdentifier"]))
                            elif i["DBInstanceStatus"] == "stopping":
                                print("Instance RDS {0} is stopping, try later.".format(i["DBInstanceIdentifier"]))
                        elif tag["Key"] != key and tag["Value"] != value:
                            print("Tag values {0} don't match".format(i["DBInstanceIdentifier"]))
                        elif len(tag["Key"]) == 0 or len(tag["Value"]) == 0:
                            print("Error {0}".format(i["DBInstanceIdentifier"]))
                        else:
                            print(
                                "Instance RDS {0} is in a different state, check the RDS monitor for more info".format(
                                    i["DBInstanceIdentifier"]
                                )
                            )

def lambda_handler(event, context):
    reboot_rds()
```

My environment variables:

| Key | Value |
| --- | --- |
| KEY | tmptest |
| REGION | us-east-1e |
| VALUE | reboot |

And finally my test event, named 'Test': `{ "key1": "tmptest", "key2": "us-east-1e", "key3": "reboot" }`

I checked the indentation of my code before executing it and it's fine, but when I executed my test event I got the following output: `{ "errorMessage": "2022-01-14T14:50:22.245Z b8d0dc59-714d-4543-8651-b5a2532dfe8e Task timed out after 1.00 seconds" }`

```
START RequestId: b8d0dc59-714d-4543-8651-b5a2532dfe8e Version: $LATEST
END RequestId: b8d0dc59-714d-4543-8651-b5a2532dfe8e
REPORT RequestId: b8d0dc59-714d-4543-8651-b5a2532dfe8e Duration: 1000.76 ms Billed Duration: 1000 ms Memory Size: 128 MB Max Memory Used: 65 MB Init Duration: 243.69 ms
2022-01-14T14:50:22.245Z b8d0dc59-714d-4543-8651-b5a2532dfe8e Task timed out after 1.00 seconds
```

My test RDS instance also has the correct tag values for the reboot action, but nothing happens; so far I cannot reboot my instance with my Lambda function. Any clue what's wrong with my code? Maybe there is some additional configuration issue, or something in my code is not correct; I don't know. I'd appreciate it if someone could give me a hand with this.

**UPDATE 2022/01/15**

At the suggestion of **Brettski@AWS** I increased the timeout from 1 second to 10, and then I got the following error message:

```
{
  "errorMessage": "Could not connect to the endpoint URL: \"https://rds.us-east-1e.amazonaws.com/\"",
  "errorType": "EndpointConnectionError",
  "requestId": "b2bb3840-42a2-4220-84b4-642d17d7a9e6",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 103, in lambda_handler\n    reiniciar_rds()\n",
    "  File \"/var/task/lambda_function.py\", line 16, in reiniciar_rds\n    response = client.describe_db_instances()\n",
    "  File \"/var/runtime/botocore/client.py\", line 386, in _api_call\n    return self._make_api_call(operation_name, kwargs)\n",
    "  File \"/var/runtime/botocore/client.py\", line 691, in _make_api_call\n    http, parsed_response = self._make_request(\n",
    "  File \"/var/runtime/botocore/client.py\", line 711, in _make_request\n    return self._endpoint.make_request(operation_model, request_dict)\n",
    "  File \"/var/runtime/botocore/endpoint.py\", line 102, in make_request\n    return self._send_request(request_dict, operation_model)\n",
    "  File \"/var/runtime/botocore/endpoint.py\", line 136, in _send_request\n    while self._needs_retry(attempts, operation_model, request_dict,\n",
    "  File \"/var/runtime/botocore/endpoint.py\", line 253, in _needs_retry\n    responses = self._event_emitter.emit(\n",
    "  File \"/var/runtime/botocore/hooks.py\", line 357, in emit\n    return self._emitter.emit(aliased_event_name, **kwargs)\n",
    "  File \"/var/runtime/botocore/hooks.py\", line 228, in emit\n    return self._emit(event_name, kwargs)\n",
    "  File \"/var/runtime/botocore/hooks.py\", line 211, in _emit\n    response = handler(**kwargs)\n",
    "  File \"/var/runtime/botocore/retryhandler.py\", line 183, in __call__\n    if self._checker(attempts, response, caught_exception):\n",
    "  File \"/var/runtime/botocore/retryhandler.py\", line 250, in __call__\n    should_retry = self._should_retry(attempt_number, response,\n",
    "  File \"/var/runtime/botocore/retryhandler.py\", line 277, in _should_retry\n    return self._checker(attempt_number, response, caught_exception)\n",
    "  File \"/var/runtime/botocore/retryhandler.py\", line 316, in __call__\n    checker_response = checker(attempt_number, response,\n",
    "  File \"/var/runtime/botocore/retryhandler.py\", line 222, in __call__\n    return self._check_caught_exception(\n",
    "  File \"/var/runtime/botocore/retryhandler.py\", line 359, in _check_caught_exception\n    raise caught_exception\n",
    "  File \"/var/runtime/botocore/endpoint.py\", line 200, in _do_get_response\n    http_response = self._send(request)\n",
    "  File \"/var/runtime/botocore/endpoint.py\", line 269, in _send\n    return self.http_session.send(request)\n",
    "  File \"/var/runtime/botocore/httpsession.py\", line 373, in send\n    raise EndpointConnectionError(endpoint_url=request.url, error=e)\n"
  ]
}
```

It's strange because my VPC configuration is fine: it's the same VPC as my RDS instance, the same zone, and the same security group. What else do I have to consider in order to make my code work properly?
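Two observations from the updated error alone, offered as guesses rather than a confirmed diagnosis: us-east-1e is an Availability Zone name, not a region, and boto3 builds the service endpoint from the region name, which is why https://rds.us-east-1e.amazonaws.com/ cannot be reached; REGION should be a region such as us-east-1. Separately, a Lambda function attached to a VPC has no internet access by default, so reaching the RDS API also requires a NAT gateway or an RDS VPC interface endpoint. A minimal sketch of a guard for the first point:

```
# Sketch: validate the REGION environment variable before building the
# client. "us-east-1e" is an Availability Zone name, not a region, so
# boto3 would construct an endpoint (rds.us-east-1e.amazonaws.com)
# that does not exist.
import boto3

region = "us-east-1e"  # the value from the question's environment table
valid_regions = boto3.session.Session().get_available_regions("rds")
if region not in valid_regions:
    # "us-east-1e" fails here; "us-east-1" would pass.
    raise ValueError("{0!r} is not a region; use e.g. 'us-east-1'".format(region))
client = boto3.client("rds", region_name=region)
```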
2 answers · 0 votes · 5 views · TEENEESE · asked 2 days ago
