Welcome to AWS re:Post
re:Post gives you access to a vibrant community that helps you become even more successful on AWS
Learn AWS faster by following popular topics
Databases
Benefit from the broadest selection of purpose-built databases for all your application needs
Networking & Content Delivery
Run any kind of workload with the broadest and deepest set of networking services available
Management & Governance
Enable, provision, and operate your environment for both business agility and governance control
Migration & Transfer
Simplify and accelerate migrations with the most comprehensive set of tools and services
Front-End Web & Mobile
Build web and mobile applications quickly with a full set of tools and services to support development
Recent questions
Is it cheaper to store resized images on S3, or to use Lambda@Edge with CloudFront to resize the image on each request?
I've built a social media app that lets people upload profile images, among other things. These profile images need to be served in both the original format and a smaller thumbnail format. Right now I'm storing a resized thumbnail version of the original image, which costs more money than storing the original image only. I could choose not to store the thumbnail version and have it created by an AWS Lambda whenever the resized image is requested from a certain endpoint. But running the Lambda function over and over again to resize images costs money too. So which approach would be cheaper: resizing the thumbnail image once and storing it on S3, or creating a resized version of the original image on demand?
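The cost trade-off described above can be sketched with rough arithmetic. The prices below are assumptions (us-east-1 ballpark figures, not current AWS list prices), and the Lambda memory and duration are placeholders; check the S3 and Lambda pricing pages before relying on the numbers.

```python
# Rough cost sketch for the two approaches. All prices are ASSUMPTIONS
# (us-east-1 ballpark); verify against the current pricing pages.

S3_STORAGE_PER_GB_MONTH = 0.023      # S3 Standard, USD
LAMBDA_PER_GB_SECOND = 0.0000166667  # Lambda compute, USD
LAMBDA_PER_REQUEST = 0.0000002       # Lambda invocation, USD

def monthly_cost_store_thumbnail(thumb_kb: float) -> float:
    """Cost of simply keeping one resized thumbnail in S3 for a month."""
    return (thumb_kb / 1_000_000) * S3_STORAGE_PER_GB_MONTH

def monthly_cost_resize_on_demand(requests: int, mem_gb: float = 0.5,
                                  duration_s: float = 0.3) -> float:
    """Cost of re-running a resize Lambda on every (uncached) request."""
    return requests * (mem_gb * duration_s * LAMBDA_PER_GB_SECOND
                       + LAMBDA_PER_REQUEST)

print(monthly_cost_store_thumbnail(30))     # ~7e-07 USD for a 30 KB thumbnail
print(monthly_cost_resize_on_demand(1000))  # ~0.0027 USD for 1000 resizes
```

Under these assumptions, storing a small thumbnail costs a tiny fraction of a cent per month, so storing once wins unless the thumbnail is almost never requested (or a CDN cache absorbs nearly all the resize invocations).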
Are resized images served with CloudFront and resized with Lambda@Edge cached?
I've just finished reading [this AWS tutorial](https://aws.amazon.com/blogs/networking-and-content-delivery/resizing-images-with-amazon-cloudfront-lambdaedge-aws-cdn-blog/) about using CloudFront with Lambda@Edge to serve resized versions of an image stored in S3. It's not entirely clear to me whether these resized images are cached or not. For instance, if person A near edge location X requests the resized image https://static.mydomain.com/images/image.jpg?d=100×100, will this resized image then be cached so that it won't have to be resized again when another person requests the same resized image?
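The behavior being asked about comes down to the cache key: CloudFront caches per edge location, and the resized variant is only reused when the query string (here `?d=100x100`) is part of the cache key, i.e. query string forwarding/caching is enabled on the behavior. A minimal local sketch of that mechanism (the URL and `resize` callback are placeholders, not CloudFront's actual internals):

```python
# Sketch of a per-edge cache keyed on (path, query string). A second
# request for the same path+query at the same edge is a hit, so the
# resize function (standing in for Lambda@Edge) runs only once.

from urllib.parse import urlsplit, parse_qsl

edge_cache: dict[tuple, bytes] = {}  # one such cache exists per edge location

def cache_key(url: str) -> tuple:
    parts = urlsplit(url)
    # Sort the query so ?d=100x100&v=1 and ?v=1&d=100x100 share one entry.
    return (parts.path, tuple(sorted(parse_qsl(parts.query))))

def fetch(url: str, resize) -> bytes:
    key = cache_key(url)
    if key not in edge_cache:        # miss -> the resize code runs
        edge_cache[key] = resize(url)
    return edge_cache[key]           # hit -> served from cache
```

Note the "per edge location" caveat: person B near a *different* edge location than X would still trigger one resize there, after which that edge has its own cached copy.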
Data Mesh on AWS Lake Formation
Hi, I'm building a data mesh in AWS Lake Formation. The idea is to have 4 accounts:

* account 0: main account
* account 1: central data governance
* account 2: data producer
* account 3: data consumer

I have been looking for information about how to implement the mesh in AWS, and I'm following some tutorials that are very similar to what I'm doing:

* https://catalog.us-east-1.prod.workshops.aws/workshops/78572df7-d2ee-4f78-b698-7cafdb55135d/en-US/lakeformation-basics/cross-account-data-mesh
* https://aws.amazon.com/blogs/big-data/design-a-data-mesh-architecture-using-aws-lake-formation-and-aws-glue/
* https://aws.amazon.com/blogs/big-data/build-a-data-sharing-workflow-with-aws-lake-formation-for-your-data-mesh/

However, after creating the bucket and uploading some CSV data to it (in the producer account), I don't know whether I have to register it first in the Glue catalog in the producer account, or whether I just do it in Lake Formation as described here: https://catalog.us-east-1.prod.workshops.aws/workshops/78572df7-d2ee-4f78-b698-7cafdb55135d/en-US/lakeformation-basics/databases (Does this depend on whether one uses Glue permissions or Lake Formation permissions in the Lake Formation configuration?) In fact, I created the database and the table in Glue first, and when I then go to the database and table sections in Lake Formation, the database and table created from Glue appear there without my doing anything. Even if I disable the options "Use only IAM access control for new databases" and "Use only IAM access control for new tables in new databases", both the database and the table still appear there. Do you know if Glue and Lake Formation share the data catalog, and whether I'm doing this correctly? Thanks, John
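On the catalog question: Glue and Lake Formation do share a single Data Catalog per account and region, which is why a database created in Glue appears in the Lake Formation console immediately; Lake Formation adds a permission layer on top of that shared catalog. A hedged boto3 sketch of registering the producer bucket as a Lake Formation data location (the bucket name and prefix are placeholders):

```python
# Sketch, not a full data-mesh setup: register an S3 location with
# Lake Formation so LF permissions (rather than plain IAM) govern it.

def s3_arn(bucket: str, prefix: str = "") -> str:
    """Build the S3 ARN Lake Formation expects for a data location."""
    return f"arn:aws:s3:::{bucket}/{prefix}".rstrip("/")

def register_data_location(bucket: str, prefix: str = "") -> None:
    import boto3  # imported here so the sketch stays importable without boto3
    lf = boto3.client("lakeformation")
    lf.register_resource(
        ResourceArn=s3_arn(bucket, prefix),
        UseServiceLinkedRole=True,  # or pass RoleArn for a custom role
    )

# register_data_location("producer-data-bucket", "csv")  # placeholder names
```

Registration happens in the producer account that owns the bucket; whether the "Use only IAM access control" defaults are on decides whether new catalog objects start under IAM-only control or under Lake Formation permissions.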
Elastic Transcoder - Error In X-TIMESTAMP-MAP Output
When you use AWS Elastic Transcoder and add a caption file, you'll get a line like this in the m3u8: `X-TIMESTAMP-MAP=MPEGTS:0, LOCAL:00:00:00.000` Note the space in front of LOCAL - video.js and other players can't deal with this and do not display the captions. Editing the file manually to remove the space, e.g. `X-TIMESTAMP-MAP=MPEGTS:0,LOCAL:00:00:00.000`, makes the captions work. Please update the X-TIMESTAMP-MAP routine to correctly remove the space, or add an option to remove it in the transcode process.
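Until the transcoder output is fixed, the manual edit described above can be automated as a post-processing step (for example in a Lambda triggered by the playlist landing in S3). A minimal sketch of the string fix itself:

```python
# Workaround sketch: strip the stray space Elastic Transcoder emits
# after "MPEGTS:<n>," in the X-TIMESTAMP-MAP line of a .m3u8 playlist.

import re

_TS_MAP = re.compile(r"(X-TIMESTAMP-MAP=MPEGTS:\d+,)\s+(LOCAL:)")

def fix_timestamp_map(playlist: str) -> str:
    """Return the playlist text with the space after the comma removed."""
    return _TS_MAP.sub(r"\1\2", playlist)
```

The regex matches any MPEGTS value (not just 0) and leaves already-correct playlists untouched, so it is safe to run unconditionally.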
Network latency inside AWS region
Hello, I would like to ask for advice about network latency inside an AWS region. I have an EC2 instance that communicates with a server that is hosted in the same AWS region. The server is not owned by me, so I have only basic public information about it, like its domain name and IP address. My goal is to achieve the lowest possible network latency between my EC2 instance and the server. Simply placing the EC2 instance in the same region results in a GET request latency of around 20 ms, but I believe this can still be improved somehow. Are you able to give some insight, please?
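One useful first step is to separate the network round-trip from everything else in that 20 ms (TLS handshakes, server processing). A sketch that times raw TCP connects and reports low percentiles, which approximate pure round-trip time — the hostname is a placeholder:

```python
# Sketch: measure raw TCP connect time repeatedly and look at the low
# percentiles; those approximate round-trip time without TLS or server work.

import socket
import time

def tcp_connect_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time a single TCP connection setup in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

def percentile(samples: list[float], p: float) -> float:
    """Simple nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(p / 100 * len(ordered)))
    return ordered[idx]

# samples = [tcp_connect_ms("the-server.example.com") for _ in range(50)]
# print("p50:", percentile(samples, 50), "p5:", percentile(samples, 5))
```

If the p5 connect time is around 1 ms (typical same-region figures), the remaining latency is TLS and server time, and the biggest wins you control are reusing connections (HTTP keep-alive) and, if the server's AZ can be inferred, placing the instance in the same Availability Zone.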
After an offline RDS OS patch operation, the RDS database is not accepting connections from outside the VPC
After an offline RDS OS patch operation. Info:

* Status is available
* Connection attempts from outside the VPC always time out
* Connections inside our VPC are working
* Last worked on Saturday at 20:40 (outside attempts)
* RDS instance - 8.0.23
* Publicly accessible - yes

Note: I created identical test RDS instances, on versions prior to and after v8.0.23, with the same VPC configuration:

* v8.0.21 - working
* v8.0.23 - not working
* v8.0.28 - not working
Permission denied (publickey,gssapi-keyex,gssapi-with-mic)- Windows 11
Hi all, I am trying to connect to an EC2 machine (default Linux) from Windows 11 as my work machine. I am receiving the following on my CLI:

```
C:\>ssh -i C:\AWS\pizza-key.pem email@example.com
Load key "C:\\AWS\\pizza-key.pem": Permission denied
firstname.lastname@example.org: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
```

Please suggest. Thanks
IIoT Security Workshop - CloudFormation error
Hello there, I am following the IIoT Security Workshop but I got the following error trying to deploy the CloudFormation stack: ![iiot-workshop-cloudformation-err](/media/postImages/original/IMwUrBSEjqRiCDEa2e4sEWZw) As suggested in the docs, I used the 'us-east-1' region for launching. The 'Troubleshooting' section in the documentation doesn't mention anything about this error, and I am wondering if someone has faced the same issue before. Would appreciate any help. Thanks! https://catalog.us-east-1.prod.workshops.aws/workshops/5b543f4c-1952-4bd9-96c8-b009c16da2bc/en-US
How to stop billing for "Route 53 Resolver Network Interface"?
Hi, I have been running EC2 instances with web servers on them for months now. I have never been billed anything by Route 53 other than for the domain name (and requests, within the free tier). Recently, I was following an AWS docs tutorial on Route 53 Resolvers. I created some Resolver rules and endpoints. However, I have now deleted all Resolver endpoints and rules (except the default rule, which I cannot possibly delete, via console or CLI), and in my billing, I am getting billed $0.125/hour for: > $0.125 per hour per Resolver Network Interface My question: how do I stop this billing from happening? I was purely trying out the Route 53 Resolvers, and now I am getting billed quite a lot of money for something I am not using and do not know how to turn off. **Things I have tried / found out:** * I am aware that the billing corresponds to the number of ENIs (also $0.125/hour) - are these the same thing? I have an ENI running in my EC2 console, but I don't understand why this would all of a sudden be a problem, because (as I stated before) I have never paid for Resolver Network Interfaces before. * _This list will be updated as I try more things_
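One distinction worth checking here: ordinary ENIs attached to running EC2 instances are not billed hourly, so the charge points specifically at a remaining Resolver endpoint (each endpoint keeps at least two ENIs alive, billed even when idle), and Resolver endpoints are regional, so one may survive in a region you are not looking at. A hedged boto3 sketch for confirming none remain, plus the cost arithmetic behind the line item:

```python
# Sketch: the cost math behind the "$0.125 per hour per Resolver Network
# Interface" line, and a regional check for leftover Resolver endpoints.

RESOLVER_ENI_PER_HOUR = 0.125  # USD, as quoted on the bill

def monthly_resolver_cost(eni_count: int, hours: float = 730) -> float:
    """Approximate monthly charge for a given number of Resolver ENIs."""
    return eni_count * RESOLVER_ENI_PER_HOUR * hours

def list_resolver_endpoints(region: str) -> list:
    import boto3  # imported here so the sketch stays importable without boto3
    r53r = boto3.client("route53resolver", region_name=region)
    return r53r.list_resolver_endpoints()["ResolverEndpoints"]

# Endpoints are regional -- check every region you experimented in:
# for region in ("us-east-1", "eu-west-1"):
#     print(region, list_resolver_endpoints(region))

print(monthly_resolver_cost(2))  # a minimal 2-ENI endpoint: ~182.50 USD/month
```

If `list_resolver_endpoints` returns an empty list in every region and the charge persists, that would be a case for AWS Support, since the default Resolver rule itself does not incur this fee.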
support for .market domain name
Hi, We have a requirement to host our platform on AWS, and we want to use AWS Route 53 to register the domain. However, AWS does not currently seem to support the ".market" TLD. Our company and website need a .market domain name to be supported. Could you please add this support? Thanks
RDS instance stuck in "stopping" status
After a migration with DMS, my RDS instance was showing storage-full alerts. I tried to stop the RDS instance in order to increase disk capacity and memory, but it got stuck in the "stopping" status. I tried to force a restart, but it doesn't work. My instance does not have Multi-AZ.
How are Access Keys more secure than a username and password?
I'm preparing to sit the Cloud Practitioner certification. I have a CCNA and some experience in network administration; however, I do not have a computer science qualification. I'm confused as to how access keys add to the security of accessing AWS resources. The documentation reads: > When you use AWS programmatically, you provide your AWS access keys so that AWS can verify your identity in programmatic calls. Your access keys consist of an access key ID (for example, AKIAIOSFODNN7EXAMPLE) and a secret access key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY). How is a human or non-human user passing access keys more secure than passing a username and password to access resources? It appears (in my ignorance) to add an unnecessary layer of complexity. Surely there is a logical reason, but I can't seem to identify it.
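A key part of the answer: unlike a password, the secret access key is never transmitted. Each request carries only an HMAC signature derived from the secret and scoped to one date, region, and service (AWS Signature Version 4), so an intercepted request cannot be replayed indefinitely or used to recover the credential. A sketch of the documented SigV4 signing-key derivation, using the example secret key from the quoted docs:

```python
# Sketch of the SigV4 signing-key derivation. The secret key itself
# never goes over the wire; only signatures derived from this scoped
# key are attached to requests.

import hashlib
import hmac

def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def signing_key(secret: str, date: str, region: str, service: str) -> bytes:
    """Derive the per-date, per-region, per-service signing key."""
    k_date = hmac_sha256(("AWS4" + secret).encode(), date)
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    return hmac_sha256(k_service, "aws4_request")

key = signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                  "20240101", "us-east-1", "s3")
# Even if this derived key leaked, it is only valid for that one
# day/region/service combination -- unlike a leaked password.
```

Access keys are also easily rotated or revoked per application without touching the human user's login, which is the other practical advantage over sharing a username and password with scripts.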
Using Google Apps Script to connect with Amazon S3
I have a Google Sheets add-on which lets users get [stock price in Google Sheets](https://workspace.google.com/marketplace/app/stock_price_in_google_sheets_finsheet/574480000400). I am trying to copy users' requests in the spreadsheet to S3. I am using the same library as in this [question](https://stackoverflow.com/questions/68776250/connect-to-amazon-s3-from-google-apps-script) on Stack Overflow and I encounter the exact same error: ```` AWS Error - SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method ```` Just wondering whether anyone has experience with this? The solution in that question does not work for me.
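`SignatureDoesNotMatch` almost always means the canonical request the client built differs byte-for-byte from the one AWS computes (header ordering, a missing `x-amz-date`, the payload hash, or un-encoded characters in the path). S3's SigV4 error response usually echoes back the canonical string it computed, so rebuilding yours locally and diffing the two pinpoints the mismatch. A minimal sketch of the canonical-request layout (the bucket host and object key below are placeholders):

```python
# Debugging sketch for SignatureDoesNotMatch: the signature is an HMAC
# over this exact string, so any byte-level difference breaks it.

import hashlib

def canonical_request(method: str, path: str, query: str,
                      headers: dict, payload_hash: str) -> str:
    """Build the canonical request string that SigV4 signs."""
    items = sorted((k.lower().strip(), v.strip()) for k, v in headers.items())
    canonical_headers = "".join(f"{k}:{v}\n" for k, v in items)
    signed_headers = ";".join(k for k, _ in items)
    return "\n".join([method, path, query, canonical_headers,
                      signed_headers, payload_hash])

# SHA-256 of an empty body; S3 also accepts the literal "UNSIGNED-PAYLOAD".
EMPTY_SHA256 = hashlib.sha256(b"").hexdigest()

cr = canonical_request("GET", "/test.txt", "",
                       {"Host": "examplebucket.s3.amazonaws.com",
                        "x-amz-date": "20130524T000000Z"},
                       EMPTY_SHA256)
```

In Apps Script specifically, a common culprit is the payload hash: if the library signs the body one way but `UrlFetchApp` sends it another (e.g. with a changed `Content-Type`), the hashes diverge and the signature fails.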
Error with EC2 Classic migration to VPC - paravirtual
Hi - I am trying to migrate from EC2-Classic to a VPC (as per AWS guidance); however, I am getting the following error: 'Step fails when it is Execute/Cancelling action. Property value 'paravirtual' from the API output is not in the desired values. Desired values; 'hvm'.' Also, the AWS advice states that the migration should be possible with minimal downtime, but I am not finding that to be the case. Any resources to assist in these areas, please?
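The error indicates the instance was launched from a paravirtual (PV) AMI, and the automated migration path only accepts HVM instances; PV instances generally cannot be moved as-is, so the usual workaround is to snapshot the data volumes, launch an equivalent HVM AMI inside the VPC, and reattach or restore the data (which is also why the downtime is larger than advertised). A hedged boto3 sketch for identifying which instances are blocked:

```python
# Sketch: find instances whose virtualization type is 'paravirtual' and
# therefore cannot use the HVM-only migration path.

def paravirtual_instances(instances: list) -> list:
    """Return IDs of instances running PV rather than HVM virtualization."""
    return [i["InstanceId"] for i in instances
            if i.get("VirtualizationType") == "paravirtual"]

def describe_instances() -> list:
    import boto3  # imported here so the sketch stays importable without boto3
    ec2 = boto3.client("ec2")
    pages = ec2.get_paginator("describe_instances").paginate()
    return [i for page in pages
            for r in page["Reservations"] for i in r["Instances"]]

# blocked = paravirtual_instances(describe_instances())
```

Any instance listed would need to be rebuilt from an HVM AMI before the migration step can succeed.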
Are Redshift auto materialized views schema bound?
I can't create a user-defined materialized view with no schema binding, and since my ETL process involves automatically recreating tables when the source DDL has changed, that means I can't use user-defined materialized views. How do the auto materialized views that just became generally available last month get around this restriction?
End-to-end encryption (to be or not to be)
Hi community, What is your position on end-to-end encryption (regardless of regulations), from a practical security point of view?

Scenario: the classic scenario of a web service fronted by an Application Load Balancer. No questions asked, we do encryption in transit for the front-end part. BUT for the communication between the load balancer and the server, the security position of AWS seems to be "encrypt everything", yet when I read AWS documentation from a sysops perspective I get the following:

> Terminating secure connections at the load balancer and using HTTP on the backend might be sufficient for your application. Network traffic between AWS resources can't be listened to by instances that are not part of the connection.

As a security practitioner, I will push for end-to-end encryption, but I would like to understand this other point of view from AWS which, on first reading, might suggest that the encryption between the load balancer and the EC2 instance is optional. I am in security now, but my background is sysadmin, and when I talk to operations people I don't like to just "impose" security regulations/standards/policies, etc. I like to explain why it's required from a technical security point of view. When it comes to our on-prem applications, it's easy to explain the risks. But in AWS it's a little bit confusing for me to justify my point when they show me AWS documentation stating that it might be enough to encrypt just the front-end part of the communications.
Recent articles
Amazon WorkSpaces End User Computing (EUC) Deployment Change Management Plan
published 20 days ago · 3 votes · 45 views
A Brief Primer for Applying Deep Graph Learning to Molecular Graphs
published a month ago · 4 votes · 30 views