Hello,
Brand new EKS cluster, latest version.
Followed the first example in this guide: https://docs.aws.amazon.com/eks/latest/userguide/cross-account-access.html
Created an OIDC identity provider in Account1 accepting requests from the EKS cluster in Account2.
In the EKS cluster, my k8s ServiceAccount resource has an eks.amazonaws.com/role-arn annotation pointing to an IAM role in Account1.
The application running in the pod is a .NET 6 app using the AWSSDK.DynamoDBv2 NuGet package to make DynamoDB queries.
It worked for a while, until at some point I got this exception:
```
Amazon.Runtime.AmazonClientException: Error calling AssumeRole for role arn:aws:iam::AcccountNumber:role/EKS-ServiceAccount
---> Amazon.SecurityToken.Model.ExpiredTokenException: Token expired: current date/time 1680295159 must be before the expiration date/time 1680281898
---> Amazon.Runtime.Internal.HttpErrorResponseException: Exception of type 'Amazon.Runtime.Internal.HttpErrorResponseException' was thrown.
```
Running kubectl describe on my pod, I see this information:
```
Environment:
  AWS_ACCESS_KEY_ID:
  AWS_SECRET_KEY:
  AWS_STS_REGIONAL_ENDPOINTS:   regional
  AWS_DEFAULT_REGION:           us-east-1
  AWS_REGION:                   us-east-1
  AWS_ROLE_ARN:                 arn:aws:iam::AcccountNumber:role/EKS-ServiceAccount
  AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token
Mounts:
  /var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
  /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mq27b (ro)
Volumes:
  aws-iam-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  86400
```
I also found [this page](https://docs.aws.amazon.com/eks/latest/userguide/pod-configuration.html) mentioning that the token should be renewed at 80% of its expiration time, and [this page](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html) with the minimum required SDK version. I can confirm that AWSSDK.DynamoDBv2, AWSSDK.SecurityToken, and AWSSDK.Core are all at versions later than that (3.7.100.14).
I was expecting the EKS cluster to automatically renew the token from the OIDC provider.
Why isn't it doing it?
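As a debugging aid, one way to see when the projected token actually expires is to decode its `exp` claim from the file that `AWS_WEB_IDENTITY_TOKEN_FILE` points to. This is only an illustrative sketch (it assumes the projected token is a standard JWT), not official AWS tooling:

```javascript
// Sketch: decode the `exp` claim of the projected service-account JWT
// to see when it expires. Assumes a standard header.payload.signature JWT.
function jwtExpiry(token) {
    // The payload segment is base64url-encoded JSON; Node's base64 decoder accepts it.
    const payload = token.split('.')[1];
    const claims = JSON.parse(Buffer.from(payload, 'base64').toString('utf8'));
    return new Date(claims.exp * 1000); // `exp` is seconds since the epoch
}

// Inside the pod you would read the projected token file, e.g.:
// const fs = require('fs');
// const token = fs.readFileSync(process.env.AWS_WEB_IDENTITY_TOKEN_FILE, 'utf8');
// console.log('token expires at', jwtExpiry(token));
```

Comparing that timestamp against the epoch values in the ExpiredTokenException would show whether the pod is holding a stale token or the SDK is caching expired credentials.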
Possibly related to https://repost.aws/questions/QUqYIZ6_LdQomBCbJz0_63Uw/jdbc-enforce-ssl-doesnt-work-for-cloudformation-type-aws-glue-connection
As described [here](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-glue-connection-connectioninput.html), JDBC_ENFORCE_SSL is an optional property when creating a Glue Connection. However, if this value is left unspecified the created connection does not receive a default value of 'false', and any attempts to use the connection result in the following error:
```
JobName:ExampleGlueJob and JobRunId:jr_12345 failed to execute with exception Unable to resolve any valid connection (Service: AWSGlueJobExecutor; Status Code: 400; Error Code: InvalidInputException; Request ID: abcde-12345; Proxy: null)
```
Editing the connection and saving it via the Web GUI results in the `JDBC_ENFORCE_SSL: false` property being set on the connection, and it can be used without further errors.
Example CFN Template:
```
rGlueConnection:
  Type: 'AWS::Glue::Connection'
  Properties:
    CatalogId: !Ref 'AWS::AccountId'
    ConnectionInput:
      ConnectionType: JDBC
      ConnectionProperties:
        JDBC_CONNECTION_URL: !Ref pJDBCConnectionURL
        USERNAME: !Sub '{{resolve:secretsmanager:${pSecretsManagerName}:SecretString:username}}'
        PASSWORD: !Sub '{{resolve:secretsmanager:${pSecretsManagerName}:SecretString:password}}'
      Name: !Ref pGlueConnectionName
      PhysicalConnectionRequirements:
        SecurityGroupIdList: !Ref pSecurityGroupIds
        SubnetId: !Ref pSubnet
```
Connection after creation (no JDBC_ENFORCE_SSL specified, jobs with connection attached fail to run):
```
ConnectionProperties:
  JDBC_CONNECTION_URL: jdbc:redshift://example.com:5439/example
  PASSWORD: 123
  USERNAME: abc
ConnectionType: JDBC
CreationTime: '2023-03-23T13:40:36.839000-07:00'
LastUpdatedTime: '2023-03-23T13:40:36.839000-07:00'
Name: ExampleConnection
PhysicalConnectionRequirements:
  SecurityGroupIdList:
  - sg-1234
  SubnetId: subnet-12345
```
Connection after opening and re-saving in Web Console (JDBC_ENFORCE_SSL:false specified, no error on job run):
```
ConnectionProperties:
  JDBC_CONNECTION_URL: jdbc:redshift://example.com:5439/example
  PASSWORD: 123
  USERNAME: abc
  JDBC_ENFORCE_SSL: 'false'
ConnectionType: JDBC
CreationTime: '2023-03-23T13:40:36.839000-07:00'
LastUpdatedTime: '2023-03-23T13:40:36.839000-07:00'
Name: ExampleConnection
PhysicalConnectionRequirements:
  SecurityGroupIdList:
  - sg-1234
  SubnetId: subnet-12345
```
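A workaround that avoids the manual console edit is to set `JDBC_ENFORCE_SSL` explicitly in the template's ConnectionProperties, mirroring the string value the console writes back. A sketch of the changed fragment (the rest of the template stays as above):

```
      ConnectionProperties:
        JDBC_CONNECTION_URL: !Ref pJDBCConnectionURL
        JDBC_ENFORCE_SSL: 'false'  # set explicitly; connections created without it fail to resolve
        USERNAME: !Sub '{{resolve:secretsmanager:${pSecretsManagerName}:SecretString:username}}'
        PASSWORD: !Sub '{{resolve:secretsmanager:${pSecretsManagerName}:SecretString:password}}'
```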
In my DynamoDB stream object, I have a field that is an array of strings, i.e. attribute type SS.
```
"FOO": {"SS": ["hello"]},
```
I want to filter out the event if any string in that array matches one of "x", "y", or "z" (placeholder values). I can't figure out the correct filter pattern syntax here, but it does seem possible based on the answer in https://repost.aws/questions/QUgqGseyltTceWNYpMF_2tXw/how-to-create-dynamo-db-stream-event-filter-for-a-field-from-array-of-objects. Here's what I've tried:
```
"FOO": {
  "SS": {
    "anything-but": ["x", "y", "z"]
  }
}
```
Can anyone advise on what the filter pattern should look like?
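For comparison, Lambda event filtering generally requires each comparison operator to be wrapped in an array, and stream attributes are usually addressed under `dynamodb.NewImage` (or `OldImage`). Something along these lines might be closer; the exact path and the `anything-but` semantics against an SS array are assumptions worth verifying:

```
{
  "dynamodb": {
    "NewImage": {
      "FOO": {
        "SS": [ { "anything-but": ["x", "y", "z"] } ]
      }
    }
  }
}
```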
Hi,
I followed a YouTube video to set up an OpenVPN EC2 instance and tunneled my home network through it, and it was working fine.
Now, a month later, the VPN server is still running fine, and I see the payment amount increasing, with more forecasted for next month. But when I log into my AWS account and go to EC2, I don't see any instances running: 0 instances. Yet the VPN is working fine.
So I wonder how to be sure that the VPN I am using is mine and not a hacker's. And why is the bill adding up?
Any help for this novice will be appreciated.
Thanks,
Repost
I tried to find a solution somewhere but didn't find a response for my case.
I already have a Compute Environment, Job Queue, and Job Definition created with the required configuration.
I can successfully submit a job manually, and it works as intended.
My Job Queue and Compute Environment go DISABLED automatically when they are idle; I think that's how AWS Batch works to optimize costs (maybe?).
I configured a cron rule in EventBridge to submit a job (using the job queue and job definition mentioned above), and it works fine, but I have to manually ENABLE the Compute Environment and Job Queue every time, which is not what I wanted. I thought of creating another EventBridge rule to run a Lambda function that enables my resources before submitting the job, but that seems overengineered for such a simple task. I think I'm missing something here. Can you give me suggestions, or correct me if I'm missing something in this simple use case? Thanks!
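If re-enabling the resources does turn out to be necessary, it doesn't have to be a Lambda; the same can be done with two AWS CLI calls, for example from a scheduled task. A sketch (resource names are placeholders):

```
aws batch update-compute-environment \
    --compute-environment my-compute-env \
    --state ENABLED
aws batch update-job-queue \
    --job-queue my-job-queue \
    --state ENABLED
```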
Our SageMaker Studio service is broken in one of our AWS accounts in some deep way. Our original domain encountered an "Update_Failed" issue when attempting to attach a new custom Docker image. Using describe-domain via the CLI, we see that the "FailureReason" is just "InternalFailure". This issue also somehow affects brand-new, entirely separate SageMaker Studio domains that we create. This is only an issue in our one (data science development) AWS account. Repeating the process in other accounts works as expected.
Hello there AWS team!
I'm looking around for the correct way to provision my devices to AWS IoT Core.
It seems provisioning by claim can do the trick, but I'm using an ESP32 with the Arduino platform, which means I don't have access to ESP-IDF.
Is it possible to do provisioning by claim in the Arduino environment? If so, can you share a link or documentation about it?
Thanks a lot in advance :)
Hi team,
we are working in an accelerator account (AWS ASEA) that has no outbound connectivity.
We cannot connect to the internet to download anything (libraries, ...).
The VPC is private only.
Our task is to fetch data from Twitter and do:
- Twitter data processing
- sentiment analysis
We would like to know **if there is a way to achieve this when our account doesn't have outbound (internet) connectivity**.
Could you please advise on best practices/architecture for these scenarios (Twitter data processing, sentiment analysis)?
Thank you
I created an instance with the LAMP (PHP 7) blueprint a few days ago. Now, when I try to create another LAMP (PHP 7) blueprint instance, I can only select LAMP (PHP 8).
I am hosting a couple of websites on Elastic Beanstalk. AWS India now only accepts manual payments at the end of each month. They don't save credit cards and Netbanking payment must be manually approved each time.
Due to health issues I am frequently in and out of hospital; last time I was admitted for 2 weeks and when I came back, I found my account got suspended for non payment (and the sites went down).
This defeats the very purpose of cloud computing if I have to be constantly around to pay the bill.
Is there any way to automate payments?
I tried to get AWS Activate credits, but that too got rejected, so I am left with no option now except to make a manual payment each time.
If automated payments are not possible, some recommendations on alternative providers will be helpful.
We are using Amazon Personalize to build a real-time recommendation model, and we used explicit impression data. A quick question we have is: when we use PutEvents to record a live event, will Amazon Personalize adjust the recommendations immediately and demote or filter out the items that were not interacted with in the impression data? In short, does sending impressions with PutEvents affect the recommendations for the same user immediately?
```
const crypto = require('crypto');

class AwsV4 {
    constructor(accessKeyID, secretAccessKey) {
        this.accessKeyID = accessKeyID;
        this.secretAccessKey = secretAccessKey;
        this.currentDateObject = new Date();
        this.xAmzDate = this.getTimeStamp(this.currentDateObject);
        this.currentDate = this.getDate(this.currentDateObject);
    }

    setPath(path) {
        this.path = path;
    }

    setServiceName(serviceName) {
        this.serviceName = serviceName;
    }

    setRegionName(regionName) {
        this.regionName = regionName;
    }

    setPayload(payload) {
        this.payload = payload;
    }

    setRequestMethod(method) {
        this.httpMethodName = method;
    }

    addHeader(headerName, headerValue) {
        this.awsHeaders = this.awsHeaders || {};
        this.awsHeaders[headerName] = headerValue;
    }

    prepareCanonicalRequest() {
        let canonicalURL = '';
        canonicalURL += this.httpMethodName + '\n';
        canonicalURL += this.path + '\n';
        // CanonicalQueryString is empty as there are no query string parameters in this case
        canonicalURL += '' + '\n';
        let signedHeaders = '';
        // Add x-amz-date header
        this.addHeader('x-amz-date', this.xAmzDate);
        // Sort headers lexicographically by header name (lowercase)
        const sortedHeaderKeys = Object.keys(this.awsHeaders).sort((a, b) => a.toLowerCase().localeCompare(b.toLowerCase()));
        for (const key of sortedHeaderKeys) {
            if (key !== 'Accept' && key !== 'Accept-Language' && key !== 'Content-Type') {
                signedHeaders += key.toLowerCase() + ';';
                canonicalURL += key.toLowerCase() + ':' + this.awsHeaders[key] + '\n';
            }
        }
        canonicalURL += '\n';
        this.strSignedHeader = signedHeaders.slice(0, -1);
        canonicalURL += this.strSignedHeader + '\n';
        canonicalURL += this.generateHex(this.payload);
        return canonicalURL;
    }

    prepareStringToSign(canonicalURL) {
        let stringToSign = '';
        stringToSign += 'AWS4-HMAC-SHA256' + '\n';
        stringToSign += this.xAmzDate + '\n';
        stringToSign += this.currentDate + '/' + this.regionName + '/' + this.serviceName + '/aws4_request' + '\n';
        stringToSign += this.generateHex(canonicalURL);
        return stringToSign;
    }

    calculateSignature(stringToSign) {
        const signatureKey = this.getSignatureKey(this.secretAccessKey, this.currentDate, this.regionName, this.serviceName);
        return crypto.createHmac('sha256', signatureKey).update(stringToSign).digest('hex');
    }

    getHeaders() {
        const canonicalURL = this.prepareCanonicalRequest();
        const stringToSign = this.prepareStringToSign(canonicalURL);
        const signature = this.calculateSignature(stringToSign);
        this.awsHeaders['Authorization'] = this.buildAuthorizationString(signature);
        this.awsHeaders['x-amz-date'] = this.xAmzDate;
        return this.awsHeaders;
    }

    getUpdatedHeaders() {
        this.setPath('/paapi5/getitems');
        this.setServiceName('ProductAdvertisingAPI');
        this.setRegionName('us-east-1');
        this.setRequestMethod('POST');
        this.setPayload(payloadJsonString); // Use the actual payload JSON string
        this.addHeader('Host', 'webservices.amazon.com');
        this.addHeader('Content-Encoding', 'amz-1.0');
        this.addHeader('Content-Type', 'application/json; charset=UTF-8');
        this.addHeader('x-amz-date', this.xAmzDate);
        this.addHeader('X-Amz-Target', 'com.amazon.paapi5.v1.ProductAdvertisingAPIv1.GetItems');
        const headers = this.getHeaders();
        return {
            'Authorization': headers['Authorization'],
            'X-Amz-Date': headers['x-amz-date']
        };
    }

    buildAuthorizationString(signature) {
        return 'AWS4-HMAC-SHA256 ' +
            'Credential=' + this.accessKeyID + '/' + this.getDate(this.currentDateObject) + '/' + this.regionName + '/' + this.serviceName + '/aws4_request ' +
            'SignedHeaders=' + this.strSignedHeader + ' ' +
            'Signature=' + signature;
    }

    generateHex(data) {
        return crypto.createHash('sha256').update(data).digest('hex');
    }

    getSignatureKey(key, date, regionName, serviceName) {
        const kSecret = 'AWS4' + key;
        const kDate = crypto.createHmac('sha256', kSecret).update(date).digest();
        const kRegion = crypto.createHmac('sha256', kDate).update(regionName).digest();
        const kService = crypto.createHmac('sha256', kRegion).update(serviceName).digest();
        return crypto.createHmac('sha256', kService).update('aws4_request').digest();
    }

    getTimeStamp(date) {
        return date.toISOString().replace(/[:-]|\.\d{3}/g, '');
    }

    getDate(date) {
        const year = date.getUTCFullYear();
        const month = ('0' + (date.getUTCMonth() + 1)).slice(-2);
        const day = ('0' + date.getUTCDate()).slice(-2);
        return `${year}${month}${day}`;
    }
}

// Credentials redacted -- substitute your own access key and secret key
const awsV4 = new AwsV4('YOUR_ACCESS_KEY_ID', 'YOUR_SECRET_ACCESS_KEY');
const payload = {
    "ItemIds": [
        "B01M6V8CP4"
    ],
    "Resources": [
        "CustomerReviews.Count",
        "CustomerReviews.StarRating",
        "Images.Variants.Large",
        "ItemInfo.Features",
        "Offers.Listings.Promotions",
        "Offers.Summaries.LowestPrice"
    ],
    "PartnerTag": "timstools03-20",
    "PartnerType": "Associates",
    "Marketplace": "www.amazon.com"
};
const payloadJsonString = JSON.stringify(payload);
// Pass the JSON string to setPayload()
awsV4.setPayload(payloadJsonString);
const updatedHeaders = awsV4.getUpdatedHeaders();
console.log(updatedHeaders);
```