By using AWS re:Post, you agree to the Terms of Use

Questions tagged with Amazon CloudFront



Annoying HLS Playback Problem On Windows But Not iOS

Hello All, I am getting up to speed with CloudFront and S3 for VOD. I have used the CloudFormation template, uploaded an MP4, and obtained the key for the m3u8 file. I create a distribution in CloudFront and embed it in my webpage. For the most part, it works great, but there is a significantly long buffering event during the first few seconds. This problem does not exist when I play the video on my iOS device. And strangely, it does not happen when I play it in Akamai's HLS tester on my Windows 11 PC using Chrome. The problem seems to only occur when I play it from my website, using any browser, on my Windows 11 PC.

Steps I take to provoke the issue: Open an Incognito tab in Chrome / navigate to my website, my player is set to auto play so it auto plays / the video starts out a bit fuzzy, it then stops for a second / restarts with great resolution / and stays that way until the end of the video. If I play again, no problems at all, but that is to be expected. I assume there is a local cache.

Steps I have tried to fix / clues: I have tried different segment lengths via modifying the Lambda function created when the stack was formed by the template. The default was 5. At that setting, the fuzzy aspect lasted the longest but the buffer event seemed slightly shorter. At 1 and 2, the fuzziness is far shorter but the buffering event is notably longer.

One thought: could this be related to the video player I am using? I wanted to use AWS IVS but could not get it working the first go around, so I tried amazon-ivs-videojs. That worked immediately, except for the buffer issue. And the buffer issue seems to go away when I test the distribution via the Akamai HLS tester. As always, much appreciation for reading this question and any time spent pondering it.
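The segment-length tradeoff described above can be reasoned about with a back-of-envelope startup model. This is purely illustrative: none of these numbers come from the post, and real players also factor in bandwidth probing and ABR ramp-up.

```typescript
// Sketch: a player typically buffers N segments before starting playback,
// so startup delay scales with segment duration, bitrate, and available
// bandwidth. (Illustrative model only, not how any specific player works.)
function estimateStartupDelaySeconds(
  segmentDurationSec: number,
  segmentsBuffered: number, // e.g. 3, a common default
  bandwidthMbps: number,
  bitrateMbps: number
): number {
  // Time to download the initially buffered segments at the given bandwidth.
  return (segmentDurationSec * segmentsBuffered * bitrateMbps) / bandwidthMbps;
}

// With 5s segments, 3 segments buffered, and a 2:1 bandwidth/bitrate ratio,
// the model predicts 7.5s of startup delay; 2s segments cut it to 3s.
console.log(estimateStartupDelaySeconds(5, 3, 10, 5)); // 7.5
console.log(estimateStartupDelaySeconds(2, 3, 10, 5)); // 3
```

This matches the poster's observation that shorter segments shorten the fuzzy (low-rendition) phase; the longer buffering they see at short segment lengths is likely a player-side buffering threshold, which is why testing a different player is a reasonable next step.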
0
answers
0
votes
4
views
Redbone
asked 2 days ago

Cloudfront Multiple Distributions Automatic Directs

Hello, I have a question. I have 2 CloudFront distributions with 2 different certificates / domains that point to the same S3 bucket:

- main distribution is 123456789.cloudfront.net, with alternate domain + certificate: main.mydomain.com
- second distribution is 987654321.cloudfront.net, with alternate domain + certificate: sub1.otherdomain.com

On DNS (I use Cloudflare) I have a CNAME for the main distribution domain:

- main.mydomain.com CNAME to 123456789.cloudfront.net

and I add other subdomains pointing to this CNAME (for better management, as I have many subdomains):

- sub1.mydomain.com CNAME to main.mydomain.com

but I also point a subdomain from the other domain to this (again because of management and some hardcoded links, **so I can't point it to its own distribution**):

- sub1.otherdomain.com CNAME to main.mydomain.com

In theory I would need to use a **CloudFront function to redirect sub1.otherdomain.com to its own distribution (987654321.cloudfront.net)**, but it works without it and I don't know why (it shouldn't, or there is some universal property of CloudFront I'm not aware of), because **there is no pointing / redirect from the first distribution to the second one** configured, the **only DNS record pointing to CloudFront is from main.mydomain.com** (CNAME to 123456789.cloudfront.net), and the **certificates are different**. Is this expected? I need to be sure, to avoid headaches in the future with production stuff.
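For reference, CloudFront selects the distribution by matching the request's Host header against alternate domain names, regardless of which *.cloudfront.net name the DNS chain happened to resolve to, which would explain the observed behavior. If an explicit redirect were still wanted, the decision logic of such a CloudFront Function might look like this (CloudFront Functions run a restricted JavaScript runtime; this sketch shows only the host check, typed for readability, with the question's placeholder domains):

```typescript
// Sketch of host-based redirect logic: requests arriving for the second
// domain are sent to its own distribution; everything else passes through.
interface RedirectResult {
  redirect: boolean;
  location?: string;
}

function routeByHost(host: string): RedirectResult {
  if (host === "sub1.otherdomain.com") {
    // Redirect to the distribution that actually owns this alternate domain.
    return { redirect: true, location: "https://987654321.cloudfront.net" };
  }
  return { redirect: false }; // serve from the current distribution
}
```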
1
answers
0
votes
6
views
Emerson Junior
asked 6 days ago

Serving My EC2 Hosted Website Via CloudFront

Hello All, First question in this forum. Hope it finds all well and prospering.

*The Set Up:* I have an EC2 instance running Nginx and hosting my site. Got an Elastic IP, opened all the right ports. So far, all working well. I have created a CloudFront distribution. It uses the domain name of my site as the source. I add a subdomain, cdn.mydomain.com, as an alternate domain at the distribution interface. I have created Sectigo SSL certs on my server and created them for my distribution, using the CloudFront interface. SSL is enabled for mydomain.com, www.mydomain.com, and cdn.mydomain.com. I set cdn.mydomain.com as a CloudFront alias record in my Route 53 DNS. It points to the URL of the CloudFront distribution. I wait until the changes are deployed before testing. My subdomain, cdn.mydomain.com, works. The URL for my distribution, dyzmywebsitejbpe.cloudfront.net, works. The distribution is obtaining content from my EC2 instance using mydomain.com as the source. It seems that all is well.

*Conflict Ensues:* I want it so that when end users enter mydomain.com into their browser, they get content via my CloudFront distribution vs getting it from my server. To make that happen I have tried all kinds of combos. The way I understand things, the following should have worked.

*Heroic Action:* I change the source of my distribution from mydomain.com to cdn.mydomain.com. Then I change two DNS records. First, cdn.mydomain.com goes from pointing to the CloudFront URL to the IP of my EC2 web server. Second, mydomain.com goes from being aimed at the IP to pointing at the CloudFront URL. Using the distribution interface, I remove cdn.mydomain.com from alternate domains and add mydomain.com. Essentially I have flipped the roles played by mydomain.com and cdn.mydomain.com. I wait for things to propagate and redeploy.

*The All Is Lost Moment:* At this point, nothing works. I get an error: too many redirects. None of the URLs that worked above work now.

*Déjà All Over Again:* I reverse things. All is well. cdn.mydomain.com and the distribution URLs serve content via my CloudFront distribution. And mydomain.com works as expected because it points at the IP of my server.

*A Man Has Got To Know His Limitations:* My obviously flawed understanding is that I need to have a subdomain as the source of the distribution and that said subdomain, cdn.mydomain.com, should point at the IP of my web server. I figure Nginx needs to know about that subdomain and have added it to my .conf file. I assume that the DNS for mydomain.com should then point at the distribution URL. I toss in a trusted-source set of certificates for all three involved domains on my server and tell Nginx what it needs to know about them. I attach a certificate generated by AWS, for all three of my domains, via the alternate URL tab of the distribution interface. But I only allow one alternate URL, which per my understanding should be mydomain.com.

*Confession Is Good For The Solution:* I am stuck in a loop. I can't get out of it. What am I missing? Thanks for any help! John Ullom
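A common cause of a "too many redirects" loop in this kind of setup: CloudFront fetches from the origin over HTTP while Nginx unconditionally redirects HTTP to HTTPS, so every origin fetch becomes another redirect. One fix is setting the origin protocol policy to HTTPS-only; another is for the origin to inspect the forwarded viewer protocol rather than its own connection's scheme. A sketch of that check follows (header names are assumptions; CloudFront must be configured to forward them to the origin):

```typescript
// Decide whether the origin should issue an HTTP -> HTTPS redirect based on
// the protocol the *viewer* used, not the protocol of CloudFront's origin
// fetch. Headers are lowercased keys as most server frameworks expose them.
function shouldRedirectToHttps(
  headers: Record<string, string | undefined>
): boolean {
  const viewerProto =
    headers["cloudfront-forwarded-proto"] ?? headers["x-forwarded-proto"];
  // Only redirect when the original viewer request was plain HTTP.
  return viewerProto === "http";
}
```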
2
answers
0
votes
11
views
Redbone
asked 6 days ago

Long response time for CloudFront misses

Need some help debugging this long response time I'm seeing from my CloudFront CDN for images that have not been cached. The outline of our setup is that we have a CloudFront CDN that responds with cached images when available. If no cached image is available, there's a Lambda that pulls the requested image from S3 and resizes it using sharp.js, then sends the resized image as the response to the request. CloudFront caches this image and then uses it for subsequent requests for the same image. The problem is that this handling usually takes 2-3s, as you can see in [this](https://i.stack.imgur.com/uCt4W.png) screenshot. I'm only partially aware of the breakdown of those 2-3s. That screenshot is of logs from CloudFront, so the problem must lie somewhere within our CloudFront setup. The Lambda itself takes 800-1300ms from start to finish, and that includes the time it takes to pull the image from S3, resize it, convert it to a buffer, then respond to the request. We already use the [http keepAlive](https://aws.amazon.com/blogs/networking-and-content-delivery/leveraging-external-data-in-lambdaedge/) optimization to reduce the latency of pulling the image from S3. However, the Lambda's running time is often 50% or less of the total response time, so there must be another significant bottleneck elsewhere that I haven't discovered, and I'm not sure how to go about finding it. I've tried enabling AWS X-Ray to get more insight into the problem, but our Lambda runs on Lambda@Edge, which doesn't support X-Ray. What else can I investigate and where else could I look?
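Since Lambda@Edge doesn't support X-Ray, one way to narrow down where the 2-3s goes is manual phase timing logged from the function itself (Lambda@Edge logs land in the CloudWatch region closest to the edge location that ran it). A minimal sketch of such a helper, with hypothetical phase names:

```typescript
// Wrap each phase of the handler in a timer and log its duration, so the
// logs show how the total splits across S3 fetch, resize, and response.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    console.log(`${label} took ${Date.now() - start}ms`);
  }
}

// Hypothetical usage inside the resize handler:
// const original = await timed("s3-get", () => fetchFromS3(key));
// const resized  = await timed("sharp-resize", () => resize(original));
```

Comparing these per-phase numbers against the total CloudFront time-taken would show whether the gap is inside the function or in the edge-to-function round trip.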
2
answers
0
votes
7
views
AWS-User-8778696
asked 8 days ago

AWS CDK 2: Package subpath './aws-cloudfront/lib/experimental' is not defined by "exports" in xxx/node_modules/aws-cdk-lib/package.json

I tried creating a demo for VueJS SSR using Lambda@Edge and AWS CDK v2. The code is below:

```
import { CfnOutput, Duration, RemovalPolicy, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Bucket } from 'aws-cdk-lib/aws-s3';
import { BucketDeployment, Source } from 'aws-cdk-lib/aws-s3-deployment';
import { CloudFrontWebDistribution, LambdaEdgeEventType, OriginAccessIdentity } from 'aws-cdk-lib/aws-cloudfront';
import { Code, Function, Runtime } from 'aws-cdk-lib/aws-lambda';
import { EdgeFunction } from 'aws-cdk-lib/aws-cloudfront/lib/experimental';

export class SsrStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const bucket = new Bucket(this, 'DeploymentsBucket', {
      websiteIndexDocument: "index.html",
      websiteErrorDocument: "index.html",
      publicReadAccess: false, //only for demo not to use in production
      removalPolicy: RemovalPolicy.DESTROY,
    });

    new BucketDeployment(this, "App", {
      sources: [Source.asset("../../web/dist/")],
      destinationBucket: bucket
    });

    const originAccessIdentity = new OriginAccessIdentity(
      this,
      'DeploymentsOriginAccessIdentity',
    );
    bucket.grantRead(originAccessIdentity);

    const ssrEdgeFunction = new EdgeFunction(this, "ssrHandler", {
      runtime: Runtime.NODEJS_14_X,
      code: Code.fromAsset("../../lambda/ssr-at-edge/"),
      memorySize: 128,
      timeout: Duration.seconds(5),
      handler: "index.handler"
    });

    const distribution = new CloudFrontWebDistribution(
      this,
      'DeploymentsDistribution',
      {
        originConfigs: [
          {
            s3OriginSource: {
              s3BucketSource: bucket,
              originAccessIdentity: originAccessIdentity
            },
            behaviors: [
              {
                isDefaultBehavior: true,
                lambdaFunctionAssociations: [
                  {
                    eventType: LambdaEdgeEventType.ORIGIN_REQUEST,
                    lambdaFunction: ssrEdgeFunction.currentVersion,
                  }
                ]
              }
            ]
          }
        ],
        errorConfigurations: [
          {
            errorCode: 403,
            responseCode: 200,
            responsePagePath: '/index.html',
            errorCachingMinTtl: 0,
          },
          {
            errorCode: 404,
            responseCode: 200,
            responsePagePath: '/index.html',
            errorCachingMinTtl: 0,
          }
        ]
      }
    );

    new CfnOutput(this, 'CloudFrontURL', {
      value: distribution.distributionDomainName
    });
  }
}
```

However, when I try deploying, it shows something like this:

```
Package subpath './aws-cloudfront/lib/experimental' is not defined by "exports" in /Users/petrabarus/Projects/kodingbarengpetra/vue-lambda-ssr/deployments/cdk/node_modules/aws-cdk-lib/package.json
```

Here's the content of the `package.json`:

```
{
  "name": "ssr-at-edge",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "jest --verbose",
    "build": "tsc",
    "watch": "tsc -w",
    "start": "npm run build -- -w"
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "@types/aws-lambda": "^8.10.89",
    "@types/node": "^17.0.5",
    "ts-node": "^10.4.0",
    "typescript": "^4.5.4"
  },
  "dependencies": {
    "vue": "^2.6.14",
    "vue-server-renderer": "^2.6.14"
  }
}
```

Is there anything I'm missing?
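For what it's worth, the error happens because the deep `aws-cdk-lib/aws-cloudfront/lib/experimental` path is an internal path that the `exports` map in `aws-cdk-lib`'s package.json does not expose. In CDK v2 the experimental `EdgeFunction` construct is re-exported by the `aws-cloudfront` module itself, so (assuming a reasonably current `aws-cdk-lib`) the import would be a fragment like:

```typescript
// Import the experimental namespace from the public subpath instead of
// the internal lib/ path blocked by package.json "exports".
import { experimental } from 'aws-cdk-lib/aws-cloudfront';

// then, inside the stack:
// const ssrEdgeFunction = new experimental.EdgeFunction(this, 'ssrHandler', { ... });
```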
1
answers
0
votes
5
views
petrabarus
asked 18 days ago

How can I serve CloudFront assets to a naked domain I manage with a non-AWS DNS provider?

Hi all! Summary: Our DNS provider, GoDaddy, does not support apex ("A") DNS records pointing to non-static IPs. We want to serve our AWS CloudFront assets to our domain's naked domain, but CloudFront gives us a url, not a static IP. Here's the current state of our setup: * We own a domain, let's call it domain.com, through GoDaddy * We manage the DNS for this domain through GoDaddy * We store our website assets in AWS S3 * We use AWS CloudFront to serve the website assets from that S3 bucket * CloudFront gives us a url, like xyz123.cloudfront.net, that the assets are served from * CloudFront does not give us a static IP address * We use AWS Certificate Manager to apply an SSL certificate to both our naked domain "domain.com" and www subdomain "www.domain.com" * The SSL certificate is applied to the CloudFront configuration * We have a CNAME DNS record pointing the www subdomain to the CloudFront url * ie. navigating to www.domain.com properly gets served the CloudFront assets, and since we have the SSL certificate applied to this domain and the CloudFront configuration we don't encounter any SSL issues. * We use a feature on GoDaddy called Forwarding to redirect any http://domain.com naked domain requests to http://www.domain.com * Our CloudFront has a “Viewer protocol policy” of “Redirect HTTP to HTTPS”, so http://www.domain.com thereafter gets converted to https://www.domain.com Current issues that we would like to solve: * We want https://domain.com to serve the CloudFront assets. 
* This may involve serving CloudFront assets directly from that url or redirecting it to https://www.domain.com * We can't serve the assets directly with our current setup because GoDaddy's DNS management does not support apex records pointing to urls - it must point to an IP, and we don't get a static IP from CloudFront * In past iterations, we’ve used GoDaddy’s Forwarding feature to attempt to redirect https://domain.com to https://www.domain.com, or even http://www.domain.com, but GoDaddy’s Forwarding feature does not support HTTPS requests. * The Forwarding feature changes the A record to point to GoDaddy’s proxy server, and that proxy server does not have our SSL certificate installed, so we were getting SSL warnings. * We own another domain, let's call it other-domain.com, and we would like to redirect all requests to both the naked domain and the www subdomain (http and https) to https://www.domain.com. * We ran into a similar issue here: we can’t use GoDaddy Forwarding here to reroute https requests - it spawns an SSL warning. We imagine the solutions may be: 1. Get a static IP from CloudFront. Is this possible? And are there time, energy, and money costs associated with this? 2. Use our own redirect server. We could potentially manage a simple, say, AWS EC2 instance that uses an nginx or Apache server that redirects requests to https://www.domain.com. We could point the naked domain to the IP of the EC2 instance, and have our own SSL certificate installed there. We're not crazy about this because it adds another node of complexity that we manage. We would be more interested in this option if there was some AWS service that gave us SSL-enabled redirect capabilities out of the box - does that exist? 3. Change our DNS provider from GoDaddy to AWS Route53. As far as we can tell Route53 allows apex DNS records to point to urls instead of requiring them to point to IP addresses, which means we can just point an A record for domain.com to the CloudFront url. 
We're also not crazy about this because migrating DNS providers is a lift, and we have many other domains managed through GoDaddy as well. Any and all feedback / suggestions are welcome. Thank you!
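On option 2, the redirect logic itself is tiny. The sketch below shows the target computation an SSL-terminating redirect layer would perform, whether that layer is an EC2-hosted Nginx or a CloudFront Function on a second distribution, using the question's placeholder domains:

```typescript
// Compute where an apex or other-domain request should be redirected.
// Returns null when the request is already on the canonical host and
// should be served normally. Domain names are the question's placeholders.
function apexRedirectTarget(host: string, uri: string): string | null {
  if (host === "domain.com" || host.endsWith("other-domain.com")) {
    return `https://www.domain.com${uri}`;
  }
  return null; // already on www.domain.com
}
```

The harder part remains DNS: GoDaddy's apex record still has to point at a stable IP for whatever terminates TLS, which is exactly the constraint Route 53 alias records remove.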
1
answers
0
votes
9
views
samgqroberts
asked a month ago

Best way to expose files from EFS over HTTP(S)?

I have some dynamically-generated files *(more context below)* stored on EFS and need to expose these files over HTTPS. I'm wondering what the best way to do this would be… I've thought of a few ideas, some might be doable and others might not, and I'm curious to see what other people think:

1. Set up a CloudFront distribution and register my EFS as an origin. This works fine for S3 but doesn't seem to be possible for EFS :-(
2. Set up some replication mechanism that would upload files to S3 as soon as they are created in EFS. I haven't checked yet if EFS can generate an event *(maybe to EventBridge?)* when a file has just been created, but if it can, plugging in another Lambda to copy from EFS to S3 would work… Or maybe a managed service would be able to do that for me? *(I don't really want to update my code to raise an event when a file has been generated, I'd rather have AWS generate that event automatically)*
3. Set up a CloudFront -> API Gateway -> Lambda that would serve the file from EFS. Executing a Lambda to serve a file is not optimal from a "cost" point of view, but those files could be cached by CloudFront *forever*, making this approach OK-ish.

Does one of these approaches sound like what you would do? Do you have another idea / recommendation?

More context:
* The files are created on EFS by a Lambda function -- when that Lambda function is called, it downloads an image and generates a thumbnail. That thumbnail is stored, as a *not-too-big* file, on EFS.
* If the Lambda was running my own code, I would change it to write the thumbnail to S3 *(and set up a CloudFront distribution to serve the thumbnails over HTTPS, idea #1)*. But this is not my code and I'm not too fond of modifying it…
* When a thumbnail is generated, it needs to be available over HTTP quickly (a delay of 1-5 seconds is okay-ish, 1-5 minutes is not OK).
* After a thumbnail has been generated, it is never updated. And thumbnails are rarely deleted (and keeping old "deleted" thumbnails for even days is OK)
* Estimates: there will be between one and ten thousand thumbnails on EFS. Total size will be between 1 and 10 GB or so.
* I expect only a few (a dozen, max) new thumbnails will be generated each day, which means a non-serverless, always-running approach will not be optimal from a "cost" point of view.
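For idea #2, the copy step itself is straightforward once some event carries the new file's path; what needs a little care is mapping the EFS path to an S3 key. A sketch of that mapping, where the `/mnt/thumbnails` mount point and the `thumbnails/` key prefix are assumptions and the actual upload would use the AWS SDK's `PutObjectCommand`:

```typescript
// Map a freshly created EFS file path to the S3 key it would be
// replicated under. Mount point and key prefix are hypothetical.
function efsPathToS3Key(
  efsPath: string,
  mountPoint = "/mnt/thumbnails"
): string {
  if (!efsPath.startsWith(mountPoint + "/")) {
    throw new Error(`unexpected path outside mount: ${efsPath}`);
  }
  // Preserve the directory layout under a dedicated prefix.
  return "thumbnails/" + efsPath.slice(mountPoint.length + 1);
}
```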
2
answers
1
votes
70
views
Pascal MARTIN
asked 2 months ago

How to validate header values in the API Gateway request before the integration

Many customers have been trying to restrict access to APIs on API Gateway to their CloudFront distribution only. The forms of restriction can come as:

- An allow list of the IP CIDRs that CloudFront uses, but this can be bypassed if the attacker uses a proxy to reach the target
- HTTP headers, which can be validated in multiple ways, e.g.:
  - A custom authorizer that validates the normal authentication header and any extra header that CloudFront could include
  - API Gateway in proxy mode, where the app deals with the request authentication and any extra header that CloudFront could include

My idea is to validate the header before we hit the integration phase, or even waste processing cycles invoking a Lambda function in the custom authorizer; if the request didn't come from my trusted source, API Gateway will drop the request early. My current solution, which maybe is not ideal, is: add a required header in the Method Request with something like `X-CDN-XXXXXXXXXX`, where `XXXXXXXXXX` is a hash that CloudFront injects in the origin request. The header is required and the value can be just `CloudFront`, or if we are using this mechanism with multiple CDNs we can add the CDN name in the value. I also change the default response for Bad Parameters to stop returning the name of the missing parameter, as this name is sensitive now.

An optional mechanism to increase the security is a second HTTP header with a secret as the value, like `X-CDN-KEY`. If the request passes the method request validation for the existence of those 2 headers, the request moves forward to the integration and the application processes it. This approach can potentially reduce processing costs, reduce latency, reduce the risk of DDoS attacks, and increase scalability.

But something that would make this even better is to validate header *values* before we hit the integration (and potentially invoke a Lambda function or any back-end to process the request). OpenAPI supports parameters as path, query, header, and cookie, and they can have a schema to validate the parameter values using type, format, regular expressions, and static values. But using API Gateway I didn't see how I can apply a model to an HTTP header and then validate the header value in the method request phase. Is that possible?
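The two-header check described above, as it would run inside a custom authorizer or the backend, reduces to the following. Header names are the question's placeholders, and a production version should compare the secret with a constant-time comparison rather than plain equality:

```typescript
// Check that both the CDN marker header and the shared-secret header are
// present and that the secret matches. Keys are lowercased, as most
// frameworks normalize incoming header names.
function isTrustedCdnRequest(
  headers: Record<string, string | undefined>,
  expectedSecret: string
): boolean {
  return (
    headers["x-cdn-xxxxxxxxxx"] !== undefined &&
    headers["x-cdn-key"] === expectedSecret // use a constant-time compare in production
  );
}
```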
1
answers
0
votes
1
views
EXPERT
Rafael Koike
asked 2 years ago

Seamlessly switch between CloudFront distributions using Route 53?

My customer ultimately wants to migrate multiple CloudFront distributions from one AWS account to another but realizes [it's not quite possible](https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-migrate-account/). Right now their CloudFront distribution is configured this way:

- the CNAME of the CloudFront distribution is the same as a production customer-facing FQDN (e.g.: download-office.customer.com)
- in Route 53 the customer-facing FQDN is pointed to the CloudFront distribution FQDN using a CNAME record (e.g.: download-office.customer.com CNAME d11ipsxxxxxxx.cloudfront.net)

What they want to do is introduce an intermediate FQDN and place it between the customer-facing FQDN and the CloudFront distribution FQDN using Route 53 alias records. So the configuration would look like:

- the CNAME of the CloudFront distribution is the same as the intermediate FQDN (e.g.: balancer-download-office.customer.com)
- in Route 53 the intermediate FQDN is pointed to the CloudFront distribution FQDN using an ALIAS record (e.g.: balancer-download-office.customer.com ALIAS d11ipsxxxxxxx.cloudfront.net)
- in Route 53 the customer-facing FQDN is pointed to the intermediate FQDN using an ALIAS record (e.g.: download-office.customer.com ALIAS balancer-download-office.customer.com)

It's working in their QA environment, but they would like feedback on any issues. However, they are hearing from support engineers that the only way to swap a CloudFront distribution without downtime is specifically [through a support case](https://aws.amazon.com/premiumsupport/knowledge-center/resolve-cnamealreadyexists-error/). The question is: **what is the best way for my customer to seamlessly switch between CloudFront distributions, and ultimately move to a CloudFront distribution in another account without downtime?**
1
answers
0
votes
2
views
Joshua_S
asked 2 years ago

Getting Access Denied when trying to access my website

Hey all. I'm very much a novice so please bear with me. I have recently set up a custom domain and a static website from an S3 bucket. Everything was working well. I have attempted to get an SSL cert and have gotten everything set up with the cert. I have public access to my bucket and I'm pretty sure I have all of my routes correct, but I keep getting this error when trying to load the site. It seems like it's treating my index.html as XML? This is just listing my file tree in the bucket.

This XML file does not appear to have any style information associated with it. The document tree is shown below.

```
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>arosedesign.me</Name>
  <Prefix/>
  <Marker/>
  <MaxKeys>1000</MaxKeys>
  <IsTruncated>false</IsTruncated>
  <Contents><Key>.gitattributes</Key><LastModified>2020-04-29T16:27:31.000Z</LastModified><ETag>"dcb240655dcbf79b8706d11c8c2a169c"</ETag><Size>68</Size><StorageClass>STANDARD</StorageClass></Contents>
  <Contents><Key>.vscode/settings.json</Key><LastModified>2020-04-29T16:27:31.000Z</LastModified><ETag>"19751b2a32e46d1ba1477f357123a898"</ETag><Size>42</Size><StorageClass>STANDARD</StorageClass></Contents>
  <Contents><Key>README.md</Key><LastModified>2020-04-29T16:27:31.000Z</LastModified><ETag>"3ed9f715ca05b78aceb5705a682af4dd"</ETag><Size>157</Size><StorageClass>STANDARD</StorageClass></Contents>
  <Contents><Key>css/styles.css</Key><LastModified>2020-04-29T16:27:31.000Z</LastModified><ETag>"819f2e7d555f28282eb54de12d06f5ab"</ETag><Size>5270</Size><StorageClass>STANDARD</StorageClass></Contents>
  <Contents><Key>index.html</Key><LastModified>2020-04-29T16:27:31.000Z</LastModified><ETag>"6dafa06451362443dab57e5780c626cb"</ETag><Size>3211</Size><StorageClass>STANDARD</StorageClass></Contents>
  <Contents><Key>pics/ABC.jpg</Key><LastModified>2020-04-29T16:27:33.000Z</LastModified><ETag>"97f4e0eb867ba688071c56456c749908"</ETag><Size>605156</Size><StorageClass>STANDARD</StorageClass></Contents>
  <Contents><Key>pics/beer.jpg</Key><LastModified>2020-04-29T16:27:32.000Z</LastModified><ETag>"102506cb7f94ed55008f225c450d5e24"</ETag><Size>160001</Size><StorageClass>STANDARD</StorageClass></Contents>
  <Contents><Key>pics/chargoLogo1.jpg</Key><LastModified>2020-04-29T16:27:32.000Z</LastModified><ETag>"ddf16827a8f70e12c244df914c231dec"</ETag><Size>259460</Size><StorageClass>STANDARD</StorageClass></Contents>
  <Contents><Key>pics/flowers.jpg</Key><LastModified>2020-04-29T16:27:32.000Z</LastModified><ETag>"17019fb045663b188a7cfce0c3e286d9"</ETag><Size>157952</Size><StorageClass>STANDARD</StorageClass></Contents>
  <Contents><Key>pics/leaf.jpg</Key><LastModified>2020-04-29T16:27:33.000Z</LastModified><ETag>"548a9f5e5318aed8e592f57fcd1ec0a7"</ETag><Size>118779</Size><StorageClass>STANDARD</StorageClass></Contents>
  <Contents><Key>pics/manAndDaughter.jpg</Key><LastModified>2020-04-29T16:27:33.000Z</LastModified><ETag>"a795a8718715761bc175a5989e7139b1"</ETag><Size>82798</Size><StorageClass>STANDARD</StorageClass></Contents>
  <Contents><Key>pics/succulents.jpg</Key><LastModified>2020-04-29T16:27:33.000Z</LastModified><ETag>"70dfad57c2d95152c7064dc64b0b2f68"</ETag><Size>229634</Size><StorageClass>STANDARD</StorageClass></Contents>
  <Contents><Key>pics/wine.jpg</Key><LastModified>2020-04-29T16:27:33.000Z</LastModified><ETag>"50a24752b35fb2a6053c0af910d8252e"</ETag><Size>71196</Size><StorageClass>STANDARD</StorageClass></Contents>
  <Contents><Key>workspace/Portfolio Home Page.code-workspace</Key><LastModified>2020-04-29T16:27:33.000Z</LastModified><ETag>"28e6c7462fb8ab91cc2dde6256ae17be"</ETag><Size>67</Size><StorageClass>STANDARD</StorageClass></Contents>
</ListBucketResult>
```

I'm not sure what information you need from me to further help me, but I would appreciate someone walking me through this. Thank you in advance!
1
answers
0
votes
0
views
philis
asked 2 years ago