
Compute

Whether you are building enterprise, cloud-native, or mobile apps, or running massive data clusters, AWS Compute services support virtually any workload. Use them to develop, deploy, run, and scale your applications and workloads.

Recent questions


We have 2 volumes we can't detach or delete

We have two volumes (vol-63046619 and vol-2c076556) that need to be deleted, but we can't delete them. They are not attached to any EC2 instances. Below are the commands we tried:

```
$ aws ec2 describe-volumes --region us-east-1 --volume-id vol-63046619
{
    "Volumes": [
        {
            "AvailabilityZone": "us-east-1d",
            "Attachments": [],
            "Tags": [
                {
                    "Value": "",
                    "Key": "Name"
                }
            ],
            "Encrypted": false,
            "VolumeType": "standard",
            "VolumeId": "vol-63046619",
            "State": "in-use",
            "SnapshotId": "snap-xxxxxxx",
            "CreateTime": "2012-10-01T20:29:01.000Z",
            "MultiAttachEnabled": false,
            "Size": 8
        }
    ]
}

$ aws ec2 delete-volume --region us-east-1 --volume-id vol-63046619

An error occurred (IncorrectState) when calling the DeleteVolume operation: The volume 'vol-63046619' is 'in-use'

$ aws ec2 detach-volume --region us-east-1 --volume-id vol-63046619

An error occurred (IncorrectState) when calling the DetachVolume operation: Volume 'vol-63046619' is in the 'available' state.

$ aws ec2 describe-volumes --region us-east-1 --volume-id vol-2c076556
{
    "Volumes": [
        {
            "AvailabilityZone": "us-east-1d",
            "Attachments": [],
            "Tags": [
                {
                    "Value": "xxxxxxxxxxxxx",
                    "Key": "aws:cloudformation:stack-name"
                },
                {
                    "Value": "",
                    "Key": "Name"
                },
                {
                    "Value": "xxxxxxxx",
                    "Key": "aws:cloudformation:logical-id"
                },
                {
                    "Value": "arn:aws:cloudformation:us-east-1:xxxxxxxxxxxxx:stack/xxxxxxxxxx/xxxxxx-xxxx-xxxx-xxxx-xxxx",
                    "Key": "aws:cloudformation:stack-id"
                }
            ],
            "Encrypted": false,
            "VolumeType": "standard",
            "VolumeId": "vol-2c076556",
            "State": "in-use",
            "SnapshotId": "",
            "CreateTime": "2012-10-01T20:28:41.000Z",
            "MultiAttachEnabled": false,
            "Size": 5
        }
    ]
}

$ aws ec2 delete-volume --region us-east-1 --volume-id vol-2c076556

An error occurred (IncorrectState) when calling the DeleteVolume operation: The volume 'vol-2c076556' is 'in-use'

$ aws ec2 detach-volume --region us-east-1 --volume-id vol-2c076556

An error occurred (IncorrectState) when calling the DetachVolume operation: Volume 'vol-2c076556' is in the 'available' state.
```

We also tried detach and force detach from the console, but it just gets stuck and doesn't help in this case.
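For reference, a CLI sketch of the force-detach path (it may well hit the same `IncorrectState` error; a volume that reports `in-use` to DescribeVolumes but `available` to DetachVolume is in an inconsistent state that typically only AWS Support can reset):

```bash
# Force-detach, wait for the state to settle, then delete.
# May fail with the same IncorrectState error shown above.
aws ec2 detach-volume --region us-east-1 --volume-id vol-63046619 --force
aws ec2 wait volume-available --region us-east-1 --volume-ids vol-63046619
aws ec2 delete-volume --region us-east-1 --volume-id vol-63046619
```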
1 answer · 0 votes · 14 views · asked 6 hours ago

RDS Custom Oracle disk full

Hello. We are currently using AWS RDS Custom for Oracle, and the OS disk has a root partition of only 10 GB even though a 42 GB volume is allocated. As a cherry on top, there is a 16 GB swap area allocated in between, which makes it harder to expand into the remaining disk space. I could barely get out of the disk-full condition by killing old log files in /var/log.

```
[root@ip- /]# df -kh
Filesystem                       Size  Used Avail Use% Mounted on
devtmpfs                         7.6G     0  7.6G   0% /dev
tmpfs                             16G  7.5G  7.8G  49% /dev/shm
tmpfs                            7.7G  785M  6.9G  11% /run
tmpfs                            7.7G     0  7.7G   0% /sys/fs/cgroup
/dev/nvme0n1p1                   9.8G  7.9G  1.8G  83% /
/dev/nvme1n1                      25G   13G   11G  54% /rdsdbbin
/dev/mapper/dbdata01-lvdbdata01  296G   25G  271G   9% /rdsdbdata
tmpfs                            1.6G     0  1.6G   0% /run/user/61001
tmpfs                            1.6G     0  1.6G   0% /run/user/61005

[root@ip- aws]# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1                   259:0    0   42G  0 disk
├─nvme0n1p1               259:1    0   10G  0 part /
├─nvme0n1p128             259:3    0    1M  0 part
└─nvme0n1p127             259:2    0   16G  0 part [SWAP]
nvme3n1                   259:6    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata
nvme2n1                   259:5    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata
nvme5n1                   259:8    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata
nvme1n1                   259:4    0   25G  0 disk /rdsdbbin
nvme4n1                   259:7    0   75G  0 disk
└─dbn0-lvdbn0             252:0    0  300G  0 lvm
  └─dbdata01-lvdbdata01   252:1    0  300G  0 lvm  /rdsdbdata

# du -sh ./*/
185M    ./bin/
143M    ./boot/
7.5G    ./dev/
43M     ./etc/
124K    ./home/
1.1G    ./lib/
195M    ./lib64/
16K     ./lost+found/
4.0K    ./media/
4.0K    ./mnt/
3.4G    ./opt/
0       ./proc/
13G     ./rdsdbbin/
25G     ./rdsdbdata/
13M     ./root/
785M    ./run/
46M     ./sbin/
4.0K    ./srv/
0       ./sys/
72K     ./tmp/
465M    ./usr/
2.5G    ./var/
```

What I'm planning to do is allocate a new swap volume, switch it on, kill the old swap, and expand the original volume as much as I can.

1) Could this harm any RDS Custom monitoring task in AWS?

2) Could some good soul at AWS look into the RDS Custom automation scripts, put swap on a separate volume, and allocate the 42 GB volume fully to the OS? Bad things can happen during OS updates, which will surely need more disk space.
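In case it helps to sketch the same plan, here is a rough outline of the swap-migration steps described above. The device name `/dev/nvme6n1` for the new swap volume is a placeholder, the root filesystem type is assumed, and host-level changes on RDS Custom can affect its support perimeter, so treat this as a sketch to validate against the RDS Custom docs, not a tested procedure:

```bash
# Placeholders: /dev/nvme6n1 = newly attached swap volume,
# /dev/nvme0n1p127 = existing 16G swap partition (per lsblk above).

# 1) Bring the new swap volume online (update /etc/fstab to persist it)
sudo mkswap /dev/nvme6n1
sudo swapon /dev/nvme6n1

# 2) Retire the old swap partition sitting between root and the free space
sudo swapoff /dev/nvme0n1p127
sudo sgdisk --delete=127 /dev/nvme0n1   # assumes a GPT-partitioned disk
sudo partprobe /dev/nvme0n1

# 3) Grow the root partition and filesystem into the reclaimed space
sudo growpart /dev/nvme0n1 1            # from cloud-utils-growpart
sudo xfs_growfs /                       # or: resize2fs /dev/nvme0n1p1 for ext4
```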
0 answers · 0 votes · 8 views · asked 6 hours ago

Lambda function as a container image: how to find your handler URI

Hello, I have followed all of the tutorials on how to build an AWS Lambda function as a container image, and I am also using the AWS SAM SDK. What I don't understand is how to figure out the endpoint URL mapping from within my image to the Lambda function. For example, my Docker image is based on the AWS Python 3.9 image, where I install some other packages and my Python requirements, and my handler is defined as:

`summarizer_function_lambda.postHandler`

The Python file copied into the image has the same name as above, minus the `.postHandler`. My AWS SAM template has:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: AWS Lambda dist-bart-summarizer function
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3

Resources:
  DistBartSum:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      FunctionName: DistBartSum
      ImageUri: <my-image-url>
      PackageType: Image
      Events:
        SummarizerFunction:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /postHandler
            Method: POST
```

So what is my actual URI path for the POST call, either locally or once deployed on Lambda? When I try a curl command I get `{"message": "Internal server error"}`:

```
curl -XPOST "https://<my-aws-uri>/Prod/postHandler/" -d '{"content": "Test data.\r\n"}'
```

So I guess my question is: how do you "map" your handler definitions from within a container all the way to the endpoint URI?
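As far as I understand SAM's model (hedged, since the Dockerfile isn't shown): the handler string only tells the Lambda runtime which Python function to invoke inside the container, via the image's CMD; it never appears in the URL. The route comes entirely from the `Path` and `Method` of the Api event, served under SAM's implicit `Prod` stage, so the deployed endpoint should be `https://<my-aws-uri>/Prod/postHandler`, and a 500 there usually means the function itself is raising an error. A local check:

```bash
# Build the image and serve the template's routes locally (port 3000 by default)
sam build
sam local start-api

# In another terminal: exercise the same Path/Method the template defines
curl -XPOST "http://127.0.0.1:3000/postHandler" -d '{"content": "Test data.\r\n"}'

# After deploying, tail the function's logs to see the real error behind the 500
# (<my-stack> is a placeholder for your deployed stack name)
sam logs -n DistBartSum --stack-name <my-stack> --tail
```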
0 answers · 0 votes · 10 views · asked 7 hours ago

Athena Error: Permission Denied on S3 Path.

I am trying to execute Athena queries from a Lambda function, but I am getting this error: `Athena Query Failed to run with Error Message: Permission denied on S3 path: s3://bkt_logs/apis/2020/12/16/14`

The bucket `bkt_logs` is the bucket used by AWS Glue crawlers to crawl through all the sub-folders and populate the Athena table that I am querying. Also, `bkt_logs` is an encrypted bucket. These are the policies that I have assigned to the Lambda:

```json
[
    {
        "Action": [
            "s3:Get*",
            "s3:List*",
            "s3:PutObject",
            "s3:DeleteObject"
        ],
        "Resource": "arn:aws:s3:::athena-query-results/*",
        "Effect": "Allow",
        "Sid": "AllowS3AccessToSaveAndReadQueryResults"
    },
    {
        "Action": [
            "s3:*"
        ],
        "Resource": "arn:aws:s3:::bkt_logs/*",
        "Effect": "Allow",
        "Sid": "AllowS3AccessForGlueToReadLogs"
    },
    {
        "Action": [
            "athena:GetQueryExecution",
            "athena:StartQueryExecution",
            "athena:StopQueryExecution",
            "athena:GetWorkGroup",
            "athena:GetDatabase",
            "athena:BatchGetQueryExecution",
            "athena:GetQueryResults",
            "athena:GetQueryResultsStream",
            "athena:GetTableMetadata"
        ],
        "Resource": [
            "*"
        ],
        "Effect": "Allow",
        "Sid": "AllowAthenaAccess"
    },
    {
        "Action": [
            "glue:GetTable",
            "glue:GetDatabase",
            "glue:GetPartitions"
        ],
        "Resource": [
            "*"
        ],
        "Effect": "Allow",
        "Sid": "AllowGlueAccess"
    },
    {
        "Action": [
            "kms:CreateGrant",
            "kms:DescribeKey",
            "kms:Decrypt"
        ],
        "Resource": [
            "*"
        ],
        "Effect": "Allow",
        "Sid": "AllowKMSAccess"
    }
]
```

What seems to be wrong here? What should I do to resolve this issue?
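One thing worth checking (an inference from the policy above, not a confirmed diagnosis): `s3:ListBucket` is a bucket-level action, and the `bkt_logs` statement only covers object ARNs (`arn:aws:s3:::bkt_logs/*`), not the bucket ARN itself (`arn:aws:s3:::bkt_logs`), so Athena may be denied when it lists the partition prefix; adding the bucket ARN to the `Resource` list is the usual fix. A quick way to probe each layer with the Lambda role's credentials:

```bash
# Assumes you have assumed the Lambda's execution role locally,
# e.g. via `aws sts assume-role`. The object key below is a placeholder.
aws s3 ls s3://bkt_logs/apis/2020/12/16/14/            # needs s3:ListBucket on the bucket ARN
aws s3 cp s3://bkt_logs/apis/2020/12/16/14/<object> -  # needs s3:GetObject plus kms:Decrypt
```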
1 answer · 0 votes · 19 views · asked 12 hours ago

Problem uploading media to AWS S3 with Django Storages / Boto3 (from a website on Lambda)

Hi all! I have a Django website deployed on AWS Lambda. All the static/media files are stored in an S3 bucket. I managed to serve static files from S3 and it works fine; however, when I try to upload media through the admin (adding an article with a pic attached to it), I get the message "Endpoint request timed out". Here is my AWS and storage configuration:

**ukraine101.aws.utils.py**

```python
from storages.backends.s3boto3 import S3Boto3Storage

StaticRootS3BotoStorage = lambda: S3Boto3Storage(location='static')
MediaRootS3BotoStorage = lambda: S3Boto3Storage(location='media')
```

**settings.py**

```python
STATICFILES_DIRS = [BASE_DIR / "static"]
STATIC_URL = 'https://<my-bucket-name>.s3.amazonaws.com/'
MEDIA_URL = 'https://<my-bucket-name>.s3.amazonaws.com/media/'
MEDIA_ROOT = MEDIA_URL
DEFAULT_FILE_STORAGE = 'ukraine101.aws.utils.MediaRootS3BotoStorage'
STATICFILES_STORAGE = 'ukraine101.aws.utils.StaticRootS3BotoStorage'
AWS_STORAGE_BUCKET_NAME = '<my-bucket-name>'
AWS_S3_REGION_NAME = 'us-east-1'
AWS_ACCESS_KEY_ID = '<my-key-i-dont-show>'
AWS_SECRET_ACCESS_KEY = '<my-secret-key-i-dont-show>'
AWS_S3_SIGNATURE_VERSION = 's3v4'
AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
AWS_S3_VERIFY = True
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
STATICFILES_LOCATION = 'static'
```

**My Article model:**

```python
class Article(models.Model):
    title = models.CharField(max_length=250, )
    summary = models.TextField(blank=False, null=False, )
    image = models.ImageField(blank=False, null=False, upload_to='articles/', )
    text = RichTextField(blank=False, null=False, )
    category = models.ForeignKey(Category, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)
    featured = models.BooleanField(default=False)
    date_created = models.DateField(auto_now_add=True)
    slug = AutoSlugField(populate_from='title')
    related_book = models.ForeignKey(Book, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)

    def get_absolute_url(self):
        return reverse("articles:article-detail", kwargs={"slug": self.slug})

    def get_comments(self):
        return Comment.objects.filter(article=self.id)

    author = models.ForeignKey(User, null=True, blank=True, default='', on_delete=models.SET_DEFAULT)
```

**AWS bucket policy:**

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:GetObjectAcl"
            ],
            "Resource": "arn:aws:s3:::<my-bucket-name>/*"
        }
    ]
}
```

**CORS:**

```json
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "POST",
            "PUT",
            "HEAD"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]
```

**User permissions policies (there are two attached):**

Policy 1:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads",
                "s3:ListBucketVersions"
            ],
            "Resource": "arn:aws:s3:::<my-bucket-name>"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:*Object*",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::<my-bucket-name>/*"
        }
    ]
}
```

Policy 2:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*",
                "s3-object-lambda:*"
            ],
            "Resource": [
                "arn:aws:s3:::<my-bucket-name>",
                "arn:aws:s3:::<my-bucket-name>/*"
            ]
        }
    ]
}
```

Please, if someone knows what might be wrong and why this timeout is happening, help me.
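A hedged observation, not a confirmed diagnosis: an IAM or bucket-policy problem would normally fail fast with `AccessDenied`, so a timeout more often points at networking, e.g. the Lambda function attached to a VPC without a NAT gateway or S3 gateway endpoint, or the upload exceeding API Gateway's 29-second integration limit. Two quick checks (bucket name and file are placeholders):

```bash
# 1) Same credentials from a machine outside Lambda: should succeed quickly
aws s3 cp test.jpg s3://<my-bucket-name>/media/test.jpg --region us-east-1

# 2) If the function is VPC-attached, confirm an S3 gateway endpoint exists
aws ec2 describe-vpc-endpoints \
  --region us-east-1 \
  --filters Name=service-name,Values=com.amazonaws.us-east-1.s3
```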
0 answers · 0 votes · 5 views · asked 14 hours ago

Run EC2 Fleet with on-demand instances across AZs

Hello, I want to start an EC2 Fleet with on-demand instances only, and I want them distributed across availability zones. Unfortunately, I couldn't find a way to do that, and all the instances always start in a single AZ. That is not a problem with spot instances, as they spawn in all the AZs. I tried different allocation strategies and priorities, but nothing helped. I am doing this in AWS CDK, using both `CfnEC2Fleet` [link](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.CfnEC2Fleet.html) and `CfnSpotFleet` [link](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.CfnSpotFleet.html). Below is my code. Is there a way to achieve this, or do I need to use something else? Thank you.

```typescript
const spotFleet = new CfnSpotFleet(stack, 'EC2-Fleet', {
  spotFleetRequestConfigData: {
    allocationStrategy: 'lowestPrice',
    targetCapacity: 8,
    iamFleetRole: fleetRole.roleArn,
    spotMaintenanceStrategies: {
      capacityRebalance: {
        replacementStrategy: 'launch-before-terminate',
        terminationDelay: 120,
      }
    },
    onDemandTargetCapacity: 4,
    instancePoolsToUseCount: stack.availabilityZones.length,
    launchTemplateConfigs: [{
      launchTemplateSpecification: {
        launchTemplateId: launchTemplate.launchTemplateId,
        version: launchTemplate.latestVersionNumber,
      },
      overrides: privateSubnets.map(subnet => ({
        availabilityZone: subnet.subnetAvailabilityZone,
        subnetId: subnet.subnetId,
      })),
    }],
  }
});

const ec2Fleet = new CfnEC2Fleet(stack, 'EC2-EcFleet', {
  targetCapacitySpecification: {
    totalTargetCapacity: 6,
    onDemandTargetCapacity: 6,
    defaultTargetCapacityType: 'on-demand',
  },
  replaceUnhealthyInstances: true,
  onDemandOptions: {
    allocationStrategy: 'prioritized',
  },
  launchTemplateConfigs: [{
    launchTemplateSpecification: {
      launchTemplateId: launchTemplate.launchTemplateId,
      version: launchTemplate.latestVersionNumber,
    },
    overrides: privateSubnets.map(subnet => ({
      availabilityZone: subnet.subnetAvailabilityZone,
      subnetId: subnet.subnetId,
    })),
  }]
});
```

Here `launchTemplate` is an instance of [`LaunchTemplate`](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.LaunchTemplate.html) and `privateSubnets` is an array of [`Subnet`](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.Subnet.html) instances, one for each AZ.
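One workaround (a different service, offered as an alternative rather than an EC2 Fleet fix): an Auto Scaling group with one subnet per AZ actively balances its on-demand capacity across those AZs, which the `lowestPrice`/`prioritized` on-demand strategies in EC2 Fleet and Spot Fleet do not attempt. A minimal CLI sketch with placeholder IDs:

```bash
# Placeholder IDs throughout; the ASG spreads desired capacity evenly
# across the AZs of the listed subnets and rebalances if it drifts.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name on-demand-fleet \
  --launch-template "LaunchTemplateId=lt-0123456789abcdef0,Version=\$Latest" \
  --min-size 6 --max-size 6 --desired-capacity 6 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333"
```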
0 answers · 0 votes · 8 views · asked a day ago

