All Questions
I created an instance with the LAMP (PHP 7) blueprint a few days ago. Now, when I try to create another LAMP (PHP 7) blueprint instance, I can only select LAMP (PHP 8).
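One way to confirm what a region currently offers is to list its blueprints programmatically. Below is a minimal Node.js sketch, assuming the `@aws-sdk/client-lightsail` package and configured credentials; the region is only an example.

```
// Sketch: list Lightsail blueprints in a region and print the LAMP variants,
// to check whether a PHP 7 blueprint is still offered/active.
// Assumption: @aws-sdk/client-lightsail is installed and credentials are configured.
const { LightsailClient, GetBlueprintsCommand } = require('@aws-sdk/client-lightsail');

async function listLampBlueprints() {
    const client = new LightsailClient({ region: 'us-east-1' }); // example region
    let pageToken;
    do {
        const resp = await client.send(new GetBlueprintsCommand({ pageToken }));
        for (const bp of resp.blueprints || []) {
            if (/lamp/i.test(bp.blueprintId)) {
                console.log(`${bp.blueprintId} - ${bp.name} (active: ${bp.isActive})`);
            }
        }
        pageToken = resp.nextPageToken;
    } while (pageToken);
}

listLampBlueprints().catch(console.error);
```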
I am hosting a couple of websites on Elastic Beanstalk. AWS India now only accepts manual payments at the end of each month: they don't save credit cards, and Netbanking payments must be manually approved each time.
Due to health issues I am frequently in and out of hospital; last time I was admitted for two weeks, and when I came back I found my account had been suspended for non-payment (and the sites went down).
This defeats the very purpose of cloud computing if I have to be constantly around to pay the bill.
Is there any way to automate payments?
I tried to get Activate credits, but that too got rejected, so I am left with no option now except to make a manual payment each time.
If automated payments are not possible, some recommendations on alternative providers would be helpful.
We are using Amazon Personalize to build a real-time recommendation model, and we used explicit impression data. A quick question: when we use PutEvents to record live events, will Amazon Personalize adjust the recommendations immediately and demote or filter out the items in the impression data that were not interacted with? In short, does sending impressions with PutEvents affect the recommendations for the same user immediately?
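For reference, a call that records a live event together with the impression list shown to the user might look roughly like the sketch below (assuming the Node.js SDK `@aws-sdk/client-personalize-events`; the tracking ID, user ID, session ID, and item IDs are placeholders).

```
// Sketch: record a live event plus the explicit impression list for that user.
// Assumption: @aws-sdk/client-personalize-events is installed; all IDs are placeholders.
const { PersonalizeEventsClient, PutEventsCommand } = require('@aws-sdk/client-personalize-events');

async function recordClickWithImpression() {
    const client = new PersonalizeEventsClient({ region: 'us-east-1' }); // example region
    await client.send(new PutEventsCommand({
        trackingId: 'YOUR_EVENT_TRACKER_ID',
        userId: 'user-123',
        sessionId: 'session-456',
        eventList: [{
            eventType: 'click',
            sentAt: new Date(),
            itemId: 'item-clicked',
            // Items that were shown to the user, including the ones not interacted with
            impression: ['item-clicked', 'item-a', 'item-b', 'item-c']
        }]
    }));
}

recordClickWithImpression().catch(console.error);
```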
```
const crypto = require('crypto');

// Minimal AWS Signature Version 4 signer for the Product Advertising API 5.0 (GetItems).
class AwsV4 {
    constructor(accessKeyID, secretAccessKey) {
        this.accessKeyID = accessKeyID;
        this.secretAccessKey = secretAccessKey;
        this.currentDateObject = new Date();
        this.xAmzDate = this.getTimeStamp(this.currentDateObject);
        this.currentDate = this.getDate(this.currentDateObject);
    }

    setPath(path) {
        this.path = path;
    }

    setServiceName(serviceName) {
        this.serviceName = serviceName;
    }

    setRegionName(regionName) {
        this.regionName = regionName;
    }

    setPayload(payload) {
        this.payload = payload;
    }

    setRequestMethod(method) {
        this.httpMethodName = method;
    }

    addHeader(headerName, headerValue) {
        this.awsHeaders = this.awsHeaders || {};
        this.awsHeaders[headerName] = headerValue;
    }

    prepareCanonicalRequest() {
        let canonicalURL = '';
        canonicalURL += this.httpMethodName + '\n';
        canonicalURL += this.path + '\n';
        // CanonicalQueryString: empty string, as there are no query string parameters in this case
        canonicalURL += '' + '\n';
        let signedHeaders = '';
        // Add x-amz-date header
        this.addHeader('x-amz-date', this.xAmzDate);
        // Sort headers lexicographically by header name (lowercase)
        const sortedHeaderKeys = Object.keys(this.awsHeaders).sort((a, b) => a.toLowerCase().localeCompare(b.toLowerCase()));
        for (const key of sortedHeaderKeys) {
            if (key !== 'Accept' && key !== 'Accept-Language' && key !== 'Content-Type') {
                signedHeaders += key.toLowerCase() + ';';
                canonicalURL += key.toLowerCase() + ':' + this.awsHeaders[key] + '\n';
            }
        }
        canonicalURL += '\n';
        this.strSignedHeader = signedHeaders.slice(0, -1);
        canonicalURL += this.strSignedHeader + '\n';
        canonicalURL += this.generateHex(this.payload);
        return canonicalURL;
    }

    prepareStringToSign(canonicalURL) {
        let stringToSign = '';
        stringToSign += 'AWS4-HMAC-SHA256' + '\n';
        stringToSign += this.xAmzDate + '\n';
        stringToSign += this.currentDate + '/' + this.regionName + '/' + this.serviceName + '/' + 'aws4_request' + '\n';
        stringToSign += this.generateHex(canonicalURL);
        return stringToSign;
    }

    calculateSignature(stringToSign) {
        const signatureKey = this.getSignatureKey(this.secretAccessKey, this.currentDate, this.regionName, this.serviceName);
        const signature = crypto.createHmac('sha256', signatureKey).update(stringToSign).digest('hex');
        return signature;
    }

    getHeaders() {
        const canonicalURL = this.prepareCanonicalRequest();
        const stringToSign = this.prepareStringToSign(canonicalURL);
        const signature = this.calculateSignature(stringToSign);
        const authorizationHeader = this.buildAuthorizationString(signature);
        this.awsHeaders['Authorization'] = authorizationHeader;
        this.awsHeaders['x-amz-date'] = this.xAmzDate;
        return this.awsHeaders;
    }

    getUpdatedHeaders() {
        this.setPath('/paapi5/getitems');
        this.setServiceName('ProductAdvertisingAPI');
        this.setRegionName('us-east-1');
        this.setRequestMethod('POST');
        this.setPayload(payloadJsonString); // Use the actual payload JSON string
        this.addHeader('Host', 'webservices.amazon.com');
        this.addHeader('Content-Encoding', 'amz-1.0');
        this.addHeader('Content-Type', 'application/json; charset=UTF-8');
        this.addHeader('x-amz-date', this.xAmzDate);
        this.addHeader('X-Amz-Target', 'com.amazon.paapi5.v1.ProductAdvertisingAPIv1.GetItems');
        const headers = this.getHeaders();
        return {
            'Authorization': headers['Authorization'],
            'X-Amz-Date': headers['x-amz-date']
        };
    }

    buildAuthorizationString(signature) {
        return 'AWS4-HMAC-SHA256' + ' ' + 'Credential=' + this.accessKeyID + '/' + this.getDate(this.currentDateObject) + '/' + this.regionName + '/' + this.serviceName + '/' + 'aws4_request' + ' ' + 'SignedHeaders=' + this.strSignedHeader + ' ' + 'Signature=' + signature;
    }

    generateHex(data) {
        return crypto.createHash('sha256').update(data).digest('hex');
    }

    // Derive the signing key: HMAC chain over date, region, service, and the "aws4_request" terminator.
    getSignatureKey(key, date, regionName, serviceName) {
        const kSecret = 'AWS4' + key;
        const kDate = crypto.createHmac('sha256', kSecret).update(date).digest();
        const kRegion = crypto.createHmac('sha256', kDate).update(regionName).digest();
        const kService = crypto.createHmac('sha256', kRegion).update(serviceName).digest();
        const kSigning = crypto.createHmac('sha256', kService).update('aws4_request').digest();
        return kSigning;
    }

    // ISO 8601 basic-format timestamp, e.g. 20230331T160040Z
    getTimeStamp(date) {
        return date.toISOString().replace(/[:-]|\.\d{3}/g, '');
    }

    // Date stamp in YYYYMMDD format (UTC)
    getDate(date) {
        const year = date.getUTCFullYear();
        const month = ('0' + (date.getUTCMonth() + 1)).slice(-2);
        const day = ('0' + date.getUTCDate()).slice(-2);
        return `${year}${month}${day}`;
    }
}

const awsV4 = new AwsV4('AKIAI6QL7ST37VECNI7A', 'ZnZS++sxYuDGxP8VOSEG2uZd8Qmtup9F51wHgOkw');

const payload = {
    "ItemIds": [
        "B01M6V8CP4"
    ],
    "Resources": [
        "CustomerReviews.Count",
        "CustomerReviews.StarRating",
        "Images.Variants.Large",
        "ItemInfo.Features",
        "Offers.Listings.Promotions",
        "Offers.Summaries.LowestPrice"
    ],
    "PartnerTag": "timstools03-20",
    "PartnerType": "Associates",
    "Marketplace": "www.amazon.com"
};

const payloadJsonString = JSON.stringify(payload);
// Pass the JSON string to setPayload()
awsV4.setPayload(payloadJsonString);
const updatedHeaders = awsV4.getUpdatedHeaders();
console.log(updatedHeaders);
```
We have been using AWS Control Tower to provision Organization accounts for a while now; we have around 10 accounts there.
A few days ago we started to migrate customer services there and the accounts are failing to build with an 'enrollment failed' error message and no specifics given.
* I have tried contacting AWS through the support resource under the account section (we don't have a support plan) but have had no response.
* I have also tried manually adding an Org account in order to set the required IAM policy, but when I log in with the root account I am told that billing has not yet been set up.
* Our Orgs are child accounts and billing belongs to the parent (management) account.
* We thought that maybe there were quota limits, so we raised our Org account quota, but it didn't resolve the issue.
* Today I was asked to update the Control Tower accounts, and some have not re-enrolled correctly.
* Also can we have a quicker turnaround on closed accounts?
This is quite critical to our deployment, and it seems to be failing with no changes on our part.
Are AWS Control Tower and Organizations not fit for production use?
Hi! I read through https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html for CloudWatch metric filter syntax and don't think this is possible, but wanted to ask anyway to see if anyone else has the same use case. Our Glue Crawler runs drop logs into CloudWatch in the following format:
```
2023-03-31T09:00:40.477-07:00 [1bd316e6-9020-456c-9c3e-7c96f80b6b6a] BENCHMARK : Running Start Crawl for Crawler staging_table
2023-03-31T09:00:40.825-07:00 [1bd316e6-9020-456c-9c3e-7c96f80b6b6a] INFO : The crawl is running by consuming Amazon S3 events.
2023-03-31T09:00:41.323-07:00 [1bd316e6-9020-456c-9c3e-7c96f80b6b6a] INFO : The number of messages in the SQS queue arn:aws:sqs:us-west-2:xxxxxxxxx:staging-crawler-queue is 8
2023-03-31T09:00:41.617-07:00 [1bd316e6-9020-456c-9c3e-7c96f80b6b6a] INFO : The number of unique events received is 2 for the target with database: staging
2023-03-31T09:02:48.853-07:00 [1bd316e6-9020-456c-9c3e-7c96f80b6b6a] BENCHMARK : Classification complete, writing results to database staging
2023-03-31T09:02:48.880-07:00 [1bd316e6-9020-456c-9c3e-7c96f80b6b6a] INFO : Crawler configured with Configuration {"Version":1.0,"CrawlerOutput":{"Partitions":{"AddOrUpdateBehavior":"InheritFromTable"}},"Grouping":{"TableGroupingPolicy":"CombineCompatibleSchemas"}} and SchemaChangePolicy {"UpdateBehavior":"LOG","DeleteBehavior":"LOG"}. Note that values in the Configuration override values in the SchemaChangePolicy for S3 Targets.
2023-03-31T09:08:29.205-07:00 [1bd316e6-9020-456c-9c3e-7c96f80b6b6a] INFO : Some files do not match the schema detected. Remove or exclude the following files from the crawler (truncated to first 200 files): staging-xxxxxxxxxxxx-us-west-2-prod/xxx/organization_id=xxxx/title_id=xxxxxxxx/land_date=2022-12-07/land_hour=17/abcdefghij.gz
2023-03-31T09:08:38.075-07:00 [1bd316e6-9020-456c-9c3e-7c96f80b6b6a] INFO : Discovered schema changes for Table staging_table in database staging
2023-03-31T09:08:54.188-07:00 [1bd316e6-9020-456c-9c3e-7c96f80b6b6a] INFO : Created partitions with values [[xxx, xxxxxx, abcde, 15], [yyy, yyyyyy, xyz, 16]] for table staging_table in database staging
2023-03-31T09:09:15.901-07:00 [1bd316e6-9020-456c-9c3e-7c96f80b6b6a] BENCHMARK : Finished writing to Catalog
2023-03-31T09:09:15.945-07:00 [1bd316e6-9020-456c-9c3e-7c96f80b6b6a] INFO : Run Summary For PARTITION:
2023-03-31T09:09:15.945-07:00 [1bd316e6-9020-456c-9c3e-7c96f80b6b6a] INFO : ADD: 2
2023-03-31T09:10:24.005-07:00 [1bd316e6-9020-456c-9c3e-7c96f80b6b6a] BENCHMARK : Crawler has finished running and is in state READY
```
Is it possible to create a metric filter that can group by the crawl ID, i.e. the `1bd316e6-9020-456c-9c3e-7c96f80b6b6a` value in the logs above? Given that each log line has a timestamp, is it possible to group these logs by the crawl ID, extract the duration between the first and last occurrences, and have it reported as a metric with the crawl ID as a dimension? Glue does not publish any CloudWatch metrics for the crawler, and this is one option I'm exploring so we can monitor and visualize our crawl times over time.
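If a metric filter can't express this, one possible workaround is a CloudWatch Logs Insights query that parses the crawl ID out of each message and aggregates the first and last timestamps per ID. A rough sketch follows, assuming the Node.js SDK `@aws-sdk/client-cloudwatch-logs`; the log group name and time range are placeholders, and note the result is query output, not a CloudWatch metric with a dimension.

```
// Sketch: derive per-crawl start/end times with a Logs Insights query.
// Assumptions: @aws-sdk/client-cloudwatch-logs is installed; log group, region and time range are placeholders.
const {
    CloudWatchLogsClient,
    StartQueryCommand,
    GetQueryResultsCommand
} = require('@aws-sdk/client-cloudwatch-logs');

// Parse the bracketed crawl id, then take min/max timestamps per id.
// Duration can be computed client-side as endTime - startTime.
const queryString = `
parse @message /\\[(?<crawl_id>[0-9a-f-]{36})\\]/
| filter ispresent(crawl_id)
| stats min(@timestamp) as startTime, max(@timestamp) as endTime by crawl_id
`;

async function crawlTimes() {
    const client = new CloudWatchLogsClient({ region: 'us-west-2' }); // example region
    const { queryId } = await client.send(new StartQueryCommand({
        logGroupName: '/aws-glue/crawlers',               // placeholder log group
        startTime: Math.floor(Date.now() / 1000) - 86400, // last 24 hours
        endTime: Math.floor(Date.now() / 1000),
        queryString
    }));

    // Poll until the query completes (simplified; real code should back off and time out).
    let results;
    do {
        await new Promise(resolve => setTimeout(resolve, 2000));
        results = await client.send(new GetQueryResultsCommand({ queryId }));
    } while (results.status === 'Running' || results.status === 'Scheduled');

    console.log(JSON.stringify(results.results, null, 2));
}

crawlTimes().catch(console.error);
```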
I'm running an app on EC2. The site started to show a net::ERR_CONTENT_LENGTH_MISMATCH error for a couple of JavaScript files.
https://sqlplusplus-tutorial.couchbase.com/tutorial/#1
Any guidance on how to troubleshoot or resolve this would help.
Thank you,
James
I observe that Aurora MySQL cluster data and cluster snapshot data can be exported to S3 in Apache Parquet format, as described in https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/export-cluster-data.html and https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-export-snapshot.html. However, I am wondering whether there is a way to export incremental backup data in a similar fashion. My objective is to export an Aurora MySQL cluster to a local/on-prem server and keep it refreshed daily using incremental backups. Please let me know your comments/thoughts. Thanks!
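For context, the export APIs referenced above always export a full snapshot or cluster; as far as I know there is no incremental variant, so the sketch below (assuming the Node.js SDK `@aws-sdk/client-rds`; all ARNs, bucket, and identifiers are placeholders) only illustrates the existing full-export workflow rather than a daily incremental refresh.

```
// Sketch: start a (full) snapshot export to S3 in Parquet format.
// Assumptions: @aws-sdk/client-rds is installed; region, ARNs, bucket and names are placeholders.
const { RDSClient, StartExportTaskCommand } = require('@aws-sdk/client-rds');

async function exportSnapshotToS3() {
    const client = new RDSClient({ region: 'us-east-1' }); // example region
    const resp = await client.send(new StartExportTaskCommand({
        ExportTaskIdentifier: 'aurora-export-2023-04-01',
        SourceArn: 'arn:aws:rds:us-east-1:123456789012:cluster-snapshot:my-cluster-snapshot',
        S3BucketName: 'my-export-bucket',
        S3Prefix: 'aurora-exports/2023-04-01',
        IamRoleArn: 'arn:aws:iam::123456789012:role/rds-s3-export-role',
        KmsKeyId: 'arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555',
        ExportOnly: ['mydatabase.mytable'] // optional: limit the export to specific schemas/tables
    }));
    console.log(resp.Status);
}

exportSnapshotToS3().catch(console.error);
```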
I built a POC using the Wordpress offering with Lightsail. As part of that, we migrated a domain name from Route 53 to Lightsail's DNS. The site was accessible via wordpress/lightsail. When the POC was complete, we decided not to move forward with Lightsail and deleted the instance.
The domain reappeared in AWS Route 53 with an SOA record and 4 NS records. If I run the "test record" feature in the Route 53 hosted zone, I get "no error." Route 53 does not let you delete the NS records, so I am stuck with what is there.
If I query my domain's NS via DnsChecker.org or MxToolBox, I get no response for NS or SOA, and I can't reach my domain from the Internet. This has been going on for about two weeks (14+ days) now. Any ideas?
Hello,
Since a Sunday maintenance window, we have been receiving
```
ERROR: unexpected pageaddr
```
for all logical replication apps (AWS DMS and Debezium on Kafka Connect).
This error is somehow recoverable and periodic.
DMS shows no error for it, but Debezium kind of fails and then recovers by reconnecting to Postgres.
We are wondering if this could be related to the Aurora DB minor version update, or whether we are using some DB parameters incorrectly (the reboot actually activates the changes).
Thanks a lot in advance :).
Best regards,
David
In my CDK project, I use a lot of Docker images for various services. These images target different platforms, since Fargate Spot doesn't support ARM64. Building all of these images on my own machine (an Apple M1 Pro) can be quite cumbersome.
Out of curiosity, I was wondering if there is a convenient way to build these Docker images on AWS. Ideally, when I run 'cdk deploy --all', it would upload my assets to AWS, build the Docker images, and publish the results to ECR.
Do you have any ideas on how I could achieve this?
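This doesn't move the build onto AWS, but for the multi-platform side of the problem: when `cdk deploy` builds image assets locally, the `aws-ecr-assets` module lets you pin each asset's target platform. A minimal sketch, assuming `aws-cdk-lib` v2; the directory path and construct IDs are placeholders.

```
// Sketch: pin a CDK Docker image asset to linux/amd64 so it can be built from
// an Apple Silicon machine (via emulation) for Fargate Spot, which is x86_64-only.
// Assumptions: aws-cdk-lib v2; the directory and construct IDs are placeholders.
const { Stack } = require('aws-cdk-lib');
const ecrAssets = require('aws-cdk-lib/aws-ecr-assets');

class ServiceStack extends Stack {
    constructor(scope, id, props) {
        super(scope, id, props);

        // Built during `cdk deploy` and pushed to the CDK-managed ECR asset repository.
        const image = new ecrAssets.DockerImageAsset(this, 'ApiImage', {
            directory: './services/api',                 // placeholder path
            platform: ecrAssets.Platform.LINUX_AMD64     // force an x86_64 build
        });

        // The asset can then be referenced from an ECS task definition via
        // ContainerImage.fromDockerImageAsset(image) in aws-cdk-lib/aws-ecs.
    }
}

module.exports = { ServiceStack };
```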
Hi guys,
I'm trying to set up data sync between 2 EC2 instances.
I set up the locations, with the mount path set to /.
Then I create the task, specifying the folder I would like to move. The task runs and shows as completed, but no new data appears on my target instance. In the task details, I see that only one file was transferred, and I can't find it.
Does anyone have any idea how to set it up?
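For comparison, here is a rough sketch of how a task restricted to one folder can be created with an include filter (assuming the Node.js SDK `@aws-sdk/client-datasync`; the region, location ARNs, names, and paths are placeholders, and filter paths are relative to each location's configured subdirectory).

```
// Sketch: create and start a DataSync task that transfers only one folder via an include filter.
// Assumptions: @aws-sdk/client-datasync is installed; region, ARNs and paths are placeholders.
const {
    DataSyncClient,
    CreateTaskCommand,
    StartTaskExecutionCommand
} = require('@aws-sdk/client-datasync');

async function syncSingleFolder() {
    const client = new DataSyncClient({ region: 'us-east-1' }); // example region

    const { TaskArn } = await client.send(new CreateTaskCommand({
        SourceLocationArn: 'arn:aws:datasync:us-east-1:123456789012:location/loc-source',
        DestinationLocationArn: 'arn:aws:datasync:us-east-1:123456789012:location/loc-dest',
        Name: 'copy-app-data',
        // With the location mounted at "/", this pattern targets /app-data on the source.
        Includes: [{ FilterType: 'SIMPLE_PATTERN', Value: '/app-data' }]
    }));

    const { TaskExecutionArn } = await client.send(new StartTaskExecutionCommand({ TaskArn }));
    console.log('Started task execution:', TaskExecutionArn);
}

syncSingleFolder().catch(console.error);
```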