Questions tagged with Amazon CloudFront
Hello, and thanks for reading.
I am currently using BunnyCDN, but when traffic increases the images stop rendering entirely (just a white page), and I am wondering why that happens ;(
To replace the CDN conveniently, I am looking for a CDN provider that works like this:
If I store JPG files in S3, can I get a unique URL for each file? I mean, if I store 100 JPG files, there should be 100 unique URLs, one per file, so that I can embed those links in my app's admin panel.
The important thing is that the URL must be the same across regions. Suppose a file is named AAA: the URL of this AAA file should be identical in the Asia, USA, and Europe regions, and when a user clicks the URL, the file should be served from the nearest region's cache automatically.
May I ask, does the CloudFront CDN have the above functionality?
Thanks
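For what it's worth, this is exactly how a single CloudFront distribution in front of one S3 bucket behaves: every object key maps to one stable URL under the distribution's domain, worldwide, and CloudFront serves each request from the nearest edge location. A minimal sketch of the URL mapping (the distribution domain below is a made-up placeholder):

```javascript
// Each S3 object key yields one stable URL under the CloudFront domain;
// the same URL works from any region and is cached at the nearest edge.
// "d1234abcd.cloudfront.net" is a placeholder distribution domain.
const CLOUDFRONT_DOMAIN = "d1234abcd.cloudfront.net";

function cloudfrontUrl(objectKey) {
  // encodeURI keeps "/" separators while escaping unsafe characters
  return `https://${CLOUDFRONT_DOMAIN}/${encodeURI(objectKey)}`;
}

const keys = ["images/AAA.jpg", "images/BBB.jpg"];
const urls = keys.map(cloudfrontUrl);
// one unique URL per object key
```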

I am trying to create an RTMP push input and a channel that uses this input, in order to livestream using the AWS Elemental MediaLive APIs.
I was able to create the input with the createInput API, but I am facing this error while creating the channel through the createChannel API:
Error: UnprocessableEntityException: outputGroups[0].outputGroupSettings.streamName additional property "streamName" exists in object when not allowed; outputGroups[0].outputGroupSettings Object does not match "hlsGroup"
This is my channel config:
```
const channelConfig = {
  Name: "test-channel",
  ChannelClass: "SINGLE_PIPELINE",
  InputAttachments: [
    {
      InputId: data.Input.Id,
      InputSettings: {
        AudioSelectors: [
          {
            Name: "Audio Selector 1",
          },
        ],
        VideoSelector: {
          ColorSpace: "FOLLOW",
          ColorSpaceUsage: "FORCE",
        },
      },
    },
  ],
  Destinations: [
    {
      Id: "destination1",
      Settings: [
        {
          StreamName: "test-stream",
          Url: "rtmp://a.rtmp.youtube.com/live2/h0g0-j7sx-g6ap-ztz0-3xcj",
        },
      ],
    },
  ],
  EncoderSettings: {
    AudioDescriptions: [
      {
        AudioSelectorName: "Audio Selector 1",
        CodecSettings: {
          AacSettings: {
            Bitrate: 96000,
            InputType: "NORMAL",
            Profile: "LC",
            RateControlMode: "CBR",
            SampleRate: 48000,
          },
        },
        LanguageCodeControl: "FOLLOW_INPUT",
        Name: "Audio Description 1",
        AudioTypeControl: "FOLLOW_INPUT",
      },
    ],
    OutputGroups: [
      {
        Name: "File Group",
        Outputs: [
          {
            OutputSettings: {
              HlsOutputSettings: {
                NameModifier: "hls",
                HlsSettings: {
                  StandardHlsSettings: {
                    AudioRenditionSets: "program_audio",
                    M3u8Settings: {
                      AudioFramesPerPes: 4,
                      AudioPids: "492-498",
                      EcmPid: "8182",
                      PatInterval: 0,
                      PcrControl: "PCR_EVERY_PES_PACKET",
                      PcrPeriod: 400,
                      PcrPid: "8181",
                    },
                  },
                },
              },
            },
            VideoDescriptionName: "Video Description",
          },
        ],
        OutputGroupSettings: {
          HlsGroupSettings: {
            Destination: {
              DestinationRefId: "destination1",
            },
          },
        },
      },
    ],
    TimecodeConfig: {
      Source: "EMBEDDED",
    },
    VideoDescriptions: [
      {
        CodecSettings: {
          H264Settings: {
            AdaptiveQuantization: "HIGH",
            AfdSignaling: "NONE",
            Bitrate: 500000,
            ColorMetadata: "INSERT",
            EntropyEncoding: "CABAC",
            FlickerAq: "ENABLED",
            FramerateControl: "SPECIFIED",
            FramerateDenominator: 1,
            FramerateNumerator: 60,
            GopBReference: "ENABLED",
            GopClosedCadence: 1,
            GopNumBFrames: 3,
            GopSize: 60,
            GopSizeUnits: "FRAMES",
            Level: "H264_LEVEL_3",
            LookAheadRateControl: "HIGH",
            MaxBitrate: 500000,
            MinIInterval: 0,
            NumRefFrames: 3,
            ParControl: "INITIALIZE_FROM_SOURCE",
            Profile: "MAIN",
            RateControlMode: "CBR",
            ScanType: "PROGRESSIVE",
            SceneChangeDetect: "ENABLED",
            Slices: 1,
            SpatialAq: "ENABLED",
            Syntax: "DEFAULT",
            TemporalAq: "ENABLED",
            TimecodeInsertion: "DISABLED",
          },
        },
        Height: 720,
        Name: "Video Description",
        RespondToAfd: "NONE",
        ScalingBehavior: "DEFAULT",
        Sharpness: 50,
        Width: 1280,
      },
    ],
  },
};
```
Any help with this error would be appreciated.
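Since the destination in the config above is an RTMP URL (YouTube), the validation error may come from pairing it with an HLS output group. Below is a sketch of what an RTMP output group looks like in the MediaLive createChannel API; treat the exact field values as assumptions to verify against the API reference, not a confirmed fix:

```javascript
// Hypothetical replacement for the OutputGroups entry above, pairing the
// RTMP destination ("destination1") with RTMP group/output settings
// instead of HLS group settings.
const rtmpOutputGroup = {
  Name: "RTMP Group",
  OutputGroupSettings: {
    RtmpGroupSettings: {
      AuthenticationScheme: "COMMON",
      CacheFullBehavior: "DISCONNECT_IMMEDIATELY",
      CacheLength: 30,
      RestartDelay: 15,
    },
  },
  Outputs: [
    {
      OutputName: "rtmp-output",
      OutputSettings: {
        RtmpOutputSettings: {
          // References the channel-level destination that holds the
          // rtmp:// URL and stream name.
          Destination: { DestinationRefId: "destination1" },
          ConnectionRetryInterval: 2,
          NumRetries: 10,
        },
      },
      AudioDescriptionNames: ["Audio Description 1"],
      VideoDescriptionName: "Video Description",
    },
  ],
};
```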
Hello, I am creating a React app by following the https://github.com/aws-solutions/live-stream-on-aws GitHub repo to implement live streaming on AWS. I understand the steps for starting the live stream, but I got stuck on CloudFront. The repo covers every step of creating the input, the MediaLive channel, the MediaPackage channel, and the endpoint, but I was not able to find how to connect MediaPackage to CloudFront. The steps I am doing in Node.js are:
1) create the MediaPackage channel,
2) create the endpoint,
3) create the input security group,
4) create the input,
5) create the MediaLive channel.
With this process I am able to play the live stream using the endpoint URL, but it is said that I have to broadcast the stream through CloudFront. How can I connect CloudFront? Or is there a GitHub repo like the one above that covers it? If you can help, I would really appreciate it!
Thank you very much!
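For context, connecting MediaPackage to CloudFront means creating a distribution whose origin is the MediaPackage endpoint's domain, then playing the stream via the CloudFront URL instead of the endpoint URL. A rough sketch of building the parameters for CloudFront's CreateDistribution call follows; the endpoint URL is a placeholder, and the managed cache policy ID is quoted from memory of the AWS managed-policy list, so verify it before use:

```javascript
// Build a minimal DistributionConfig for a MediaPackage endpoint.
// In a real app this object would be passed to CreateDistributionCommand
// from @aws-sdk/client-cloudfront.
function buildDistributionConfig(endpointUrl) {
  const originDomain = new URL(endpointUrl).hostname;
  return {
    CallerReference: `live-${Date.now()}`, // must be unique per request
    Comment: "Live stream via MediaPackage",
    Enabled: true,
    Origins: {
      Quantity: 1,
      Items: [
        {
          Id: "mediapackage-origin",
          DomainName: originDomain,
          CustomOriginConfig: {
            HTTPPort: 80,
            HTTPSPort: 443,
            OriginProtocolPolicy: "https-only",
          },
        },
      ],
    },
    DefaultCacheBehavior: {
      TargetOriginId: "mediapackage-origin",
      ViewerProtocolPolicy: "redirect-to-https",
      // "Elemental-MediaPackage" managed cache policy ID -- verify
      // against the AWS managed cache policies documentation.
      CachePolicyId: "08627262-05a9-4f76-9ded-b50ca2e3a84f",
    },
  };
}

// Placeholder endpoint URL with made-up IDs.
const config = buildDistributionConfig(
  "https://abc123.mediapackage.us-east-1.amazonaws.com/out/v1/xyz/index.m3u8"
);
```

The player then requests `https://<distribution-domain>/out/v1/xyz/index.m3u8` instead of the raw endpoint URL.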
Hi,
I have found a typo in a CloudFront origin URL that points to an S3 bucket used for redirection, but I realised this error needs to be corrected in CloudFormation. What changes do I make in the template? I really am new to CloudFormation.
Many thanks in advance.
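For orientation, the origin URL usually lives in the `DomainName` property of the distribution's `Origins` list in the template. The excerpt below is a hypothetical sketch (resource and bucket names are placeholders); the idea is to fix the typo there and update the stack, rather than editing the distribution by hand:

```yaml
# Hypothetical excerpt of a CloudFront distribution resource; correct the
# typo in DomainName, then run a stack update so CloudFormation applies it.
Resources:
  MyDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Origins:
          - Id: s3-redirect-origin
            # Website-style redirect buckets are treated as custom origins
            DomainName: my-bucket.s3-website-us-east-1.amazonaws.com
            CustomOriginConfig:
              OriginProtocolPolicy: http-only
        DefaultCacheBehavior:
          TargetOriginId: s3-redirect-origin
          ViewerProtocolPolicy: redirect-to-https
```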
I am trying to set up OpenSearch in a private VPC subnet behind a load balancer in a public subnet. The load balancer endpoint is in turn placed behind a CloudFront distribution. Right now I am testing this with HTTP; I will try HTTPS once we are able to set up our DNS.

After configuring the security groups to allow OpenSearch and the ALB to communicate, and after adding the listener/target group, I am able to connect to OpenSearch through the load balancer endpoint. However, if I try to access it via the CloudFront endpoint, I get a 504 error: The Request Could Not Be Satisfied.

I tried pinging the ALB endpoint via curl and noticed that it takes 75 seconds to respond with 200 OK. So it seems that CloudFront is not responding due to late responses from the load balancer. It always takes exactly 75 seconds, except sometimes when I fire up the cluster: the first response comes back in a fraction of a second as it should, then all subsequent attempts take 75 seconds.

I am in Maryland and the cluster is set up in the Oregon region. I tried this with three progressively larger OpenSearch instances, and the compute power made no difference. I've been trying to figure this out for weeks; any suggestions on what I am doing wrong? Thanks!
Problem Statement:
The [tailwind nextjs starter template](https://github.com/timlrx/tailwind-nextjs-starter-blog) cannot be deployed properly on AWS using GitHub Actions. The deployment process involves pushing the exported files to S3 and serving them using S3 + CloudFront + Route 53.
One of my domains, e.g. https://domainA.com, works by just uploading the files to S3 without exporting them: using GitHub Actions, I push the files to S3 and then connect the bucket to CloudFront using an origin access identity. This works as expected.
But another of my domains, e.g. https://domainB.com, doesn't work and gives an Access Denied error. (I checked the bucket policy and it allows access to the S3 bucket; the bucket is publicly accessible.)
I want to solve the above error; please suggest options.
Now, coming to another problem: I have realized that the files in S3 should be the exported output files, so I now push the exported files to the S3 locations using GitHub Actions. CloudFront is connected to the S3 bucket using OAI or public origin access. Once everything is set up correctly, I am able to route to my domain, but the site does not work properly. I am assuming that the system is unable to locate additional files from S3 that it needs.
How can I solve this error as well?
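One common cause of both Access Denied responses and broken sub-pages with a statically exported Next.js site behind CloudFront and an S3 REST origin is that the origin does not resolve directory paths such as `/about` to `/about/index.html`. A hedged sketch of a CloudFront Function (viewer request) that rewrites such URIs, assuming an export where each route has its own `index.html`:

```javascript
// CloudFront Function (viewer-request event): rewrite extension-less
// paths to the index.html that `next export` produces for each route.
function handler(event) {
  var request = event.request;
  var uri = request.uri;
  if (uri.endsWith("/")) {
    request.uri = uri + "index.html"; // "/blog/" -> "/blog/index.html"
  } else if (!uri.split("/").pop().includes(".")) {
    request.uri = uri + "/index.html"; // "/blog" -> "/blog/index.html"
  }
  return request;
}
```

Requests for real files (anything with an extension, like `/styles.css`) pass through unchanged.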
I'm having an issue with the **AWS Cookie-Signer V3** and **Custom Policies**. I'm currently using `@aws-sdk/cloudfront-signer v3.254.0`. I have followed the official docs of how to create and handle signed cookies - it works as long as I don't use custom policies.
### Setup
I use a custom lambda via an API Gateway to obtain the `Set-Cookie` header with my signed cookies. These cookies will be attached to a further file-request via my AWS Cloudfront instance. In order to avoid CORS errors, I have set up **custom domains** for the API Gateway as well as for the Cloudfront instance.
A minified example of the signing and the return value looks as follows:
```js
// Expiration time
const getExpTime = new Date(Date.now() + 5 * (60 * 60 * 1000)).toISOString();
// Cookie-Signer
const signedCookie = getSignedCookies({
keyPairId: "MY-KEYPAIR-ID",
privateKey: "MY-PRIVATE-KEY",
url: "https://cloudfront.example.com/path-to-file/file.m3u8",
dateLessThan: getExpTime,
});
// Response
const response = {
statusCode: 200,
isBase64Encoded: false,
body: JSON.stringify({ url: url, bucket: bucket, key: key }),
headers: {
"Content-Type": "application/json",
"Access-Control-Allow-Origin": "https://example.com",
"Access-Control-Allow-Credentials": true,
"Access-Control-Allow-Methods": "OPTIONS,POST,GET",
},
multiValueHeaders: {
"Set-Cookie": [
`CloudFront-Expires=${signedCookie["CloudFront-Expires"]}; Domain=example.com; Path=/${path}/`,
`CloudFront-Signature=${signedCookie["CloudFront-Signature"]}; Domain=example.com; Path=/${path}/`,
`CloudFront-Key-Pair-Id=${signedCookie["CloudFront-Key-Pair-Id"]}; Domain=example.com; Path=/${path}/`,
],
},
};
```
This works well if I request **a single file** from my S3 bucket. However, since I want to stream video files from my S3 via Cloudfront and according to the [AWS docs](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-custom-policy.html), **wildcard characters** are only allowed with **Custom Policies**. I need this wildcard to give access to the entire video folder with my video chunks. Again following the official docs, I have updated my lambda with:
```js
// Expiration time
const getExpTime = new Date(Date.now() + 5 * (60 * 60 * 1000)).getTime();
// Custom Policy
const policyString = JSON.stringify({
Statement: [
{
Resource: "https://cloudfront.example.com/path-to-file/*",
Condition: {
DateLessThan: { "AWS:EpochTime": getExpTime },
},
},
],
});
// Cookie signing
const signedCookie = getSignedCookies({
keyPairId: "MY-KEYPAIR-ID",
privateKey: "MY-PRIVATE-KEY",
policy: policyString,
url: "https://cloudfront.example.com/path-to-file/*",
});
```
which results in a `Malformed Policy` error.
What confuses me is that the `getSignedCookies()` method requires the `url` property even though I'm using a custom policy with the `Resource` parameter. Since the Resource parameter is optional, I've also tried without it, which led to the same error.
To rule out that something is wrong with the wildcard character, I've also run a test pointing at the exact file but using the custom policy. Although this works **without** the custom policy, it fails with the `Malformed Policy` error when the custom policy is used.
Since there is also no example of how to use the CloudFront Cookie-Signer V3 with custom policies, I'd be very grateful if someone could tell me how I'm supposed to write this out!
Cheers! 🙌
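One detail worth checking when comparing the two snippets above: CloudFront custom policies expect `AWS:EpochTime` as a Unix timestamp in seconds, whereas `Date.prototype.getTime()` returns milliseconds. A small sketch of building the policy string with a seconds-based expiry (`buildCustomPolicy` is a made-up helper, not part of the SDK):

```javascript
// Build a CloudFront custom-policy string. "AWS:EpochTime" must be a
// Unix timestamp in SECONDS; Date.prototype.getTime() returns
// milliseconds, which CloudFront will not interpret as intended.
function buildCustomPolicy(resourceUrl, ttlSeconds) {
  const expiresAt = Math.floor(Date.now() / 1000) + ttlSeconds;
  return JSON.stringify({
    Statement: [
      {
        Resource: resourceUrl,
        Condition: { DateLessThan: { "AWS:EpochTime": expiresAt } },
      },
    ],
  });
}

const policy = buildCustomPolicy(
  "https://cloudfront.example.com/path-to-file/*",
  5 * 60 * 60 // five hours, matching the snippets above
);
```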
We have a problem with CloudFront and something that for us seems to be a bug.
CloudFront is in our case inconsistently following the Managed SimpleCORS policy based on region.
Our configuration is a distribution with an S3 origin, where the following policies are being used:
- Viewer protocol policy: Redirect HTTP to HTTPS
- Cache policy name: Managed-CachingOptimized
- Origin request policy Name: -
- Response headers policy name: Managed-SimpleCORS
Initially it works fine, and it also seems to work fine for a little while after we run an invalidation, but it has happened several times now that, over time, the CORS response becomes incorrect depending on the geographical region the request is sent from.
This causes a CORS issue in e.g. Chrome.
We have even tried creating a new CloudFront distribution from scratch, configuring the behaviour and origins, and end up with the same result.
The issue is verifiable using the following CURL request:
```
curl -i 'https://dist.mysite.com/fonts/mulish-v12-latin-700.woff2' \
-H 'authority: dist.myworkout.com' \
-H 'accept: /' \
-H 'accept-language: en' \
-H 'cache-control: no-cache' \
-H 'origin: https://go.myworkout.com' \
-H 'pragma: no-cache' \
-H 'referer: https://go.myworkout.com/styles.fad176935017dfec.css' \
-H 'sec-ch-ua: "Chromium";v="110", "Not A(Brand";v="24", "Google Chrome";v="110"' \
-H 'sec-ch-ua-mobile: ?0' \
-H 'sec-ch-ua-platform: "Windows"' \
-H 'sec-fetch-dest: font' \
-H 'sec-fetch-mode: cors' \
-H 'sec-fetch-site: same-site' \
-H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36' \
--compressed
```
yields this correct result from Norway (where the access-control headers ARE included):
```
HTTP/2 200
content-type: font/woff2
content-length: 11164
access-control-allow-origin: *
access-control-allow-methods: GET
last-modified: Fri, 13 Jan 2023 20:03:47 GMT
x-amz-server-side-encryption: AES256
x-amz-version-id: vZw9469Ogk6XEvU5RIQmB288hzwQr6zG
accept-ranges: bytes
server: AmazonS3
date: Wed, 08 Feb 2023 19:52:10 GMT
etag: "d08677b723b410a78debca060c4d2ca2"
vary: Accept-Encoding
x-cache: Hit from cloudfront
via: 1.1 4bbc14b5834fc74ccd249b954b43a08c.cloudfront.net (CloudFront)
x-amz-cf-pop: OSL50-P1
alt-svc: h3=":443"; ma=86400
x-amz-cf-id: ncSFQlZj71-fHM_finid9YYvtJsCHwCaK6fL8rhfE6Hg-HDypXe3pg==
age: 54574
```
And this result from Spain (where the access-control headers are NOT included):
```
HTTP/2 200
content-type: font/woff2
content-length: 11164
last-modified: Fri, 13 Jan 2023 20:03:47 GMT
x-amz-version-id: vZw9469Ogk6XEvU5RIQmB288hzwQr6zG
accept-ranges: bytes
server: AmazonS3
date: Thu, 09 Feb 2023 10:35:27 GMT
etag: "d08677b723b410a78debca060c4d2ca2"
vary: Accept-Encoding
x-cache: Hit from cloudfront
via: 1.1 c95bbb2353ba80a0b30261c24e526ab4.cloudfront.net (CloudFront)
x-amz-cf-pop: MAD56-P4
alt-svc: h3=":443"; ma=86400
x-amz-cf-id: ux_suZMPIzk8iWSiESC9FyY6-soMs4u809-brTT00YHjRnEFpjZtug==
age: 1769
```
If we connect via VPN to Norway while in Spain, we get the correct result again, the same as for the request from Norway.
There are some similar threads, but they do not seem to be strictly the same issue and relate to other problems.
One of them relates to the [use of OPTIONS](https://repost.aws/questions/QU9Hn9Eb7XTZiYxVIBV-HNOQ/cloudfront-distribution-returning-incorrect-cors-headers).
Hi, continuous deployment cannot be used with a distribution that supports HTTP/3 (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/continuous-deployment.html). Is this going to be supported? If yes, is there a timeline for this feature?
I have a CloudFront (with a registered domain) "connected" to an S3 bucket. The `Viewer protocol policy` is set to `HTTPS only`. However, requesting `http://mydomain.com` will reply with an HTTP answer.
Tested on: securityheaders.com
**Background**
By default, AWS adds several very revealing pieces of information to the response headers of a setup using CloudFront and CodeBuild. Among others, *x-amz-meta-codebuild-buildarn* reveals the internal ARN of CodeBuild (including the internal name used for the project).
*A little feedback on the side:*
It would be nice if AWS would stop sending these by default.
CloudFront provides a way to remove headers, but only in a one-entry-at-a-time text box without wildcard support, so one needs to copy and paste every single header into that list.
**Question**
Is there another way I'm not seeing to more easily remove those headers?
*edit:* I just figured out that the following headers are not even removable/overwritable at all: x-amz-cf-id, x-amz-cf-pop, x-cache, which is a pity :(
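As one possible workaround (an assumption to verify, not a confirmed AWS recommendation): a CloudFront Function on the viewer-response event can drop headers by prefix, which gives the wildcard-like behavior the console's remove-headers list lacks. It cannot touch headers CloudFront itself appends afterwards, such as x-amz-cf-id:

```javascript
// CloudFront Function (viewer-response event): strip all x-amz-meta-*
// response headers by prefix instead of listing each one individually.
function handler(event) {
  var response = event.response;
  for (var name in response.headers) {
    if (name.startsWith("x-amz-meta-")) {
      delete response.headers[name];
    }
  }
  return response;
}
```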