Questions tagged with Management & Governance
Content language: English
Sort by most recent
Hi,
we configured SSO for QuickSight and followed the instructions in this blog:
https://aws.amazon.com/de/blogs/big-data/enable-federation-to-amazon-quicksight-with-automatic-provisioning-of-users-between-aws-iam-identity-center-and-microsoft-azure-ad/
However, in this article every user will be an admin, because https://aws.amazon.com/SAML/Attributes/Role will always be mapped to arn:aws:iam:: <YourAWSAccount ID>:role/QuickSight-Admin-Role - the role does not depend on the user group.

As described in the article, we created 3 IAM roles and Azure AD groups (Admin, Author, Reader). How can we assign IAM roles to the AD group? We already tried using claims in Azure AD, as described here: https://aws.amazon.com/de/blogs/big-data/enabling-amazon-quicksight-federation-with-azure-ad/
Hello aws re:Post
I want to run my pods (network wise) in a different subnet and for that I make use of the custom CNI config for the AWS-CNI plugin which already works like a charm.
Now I want to automate the whole process.
I already archived to create the CRD eniconfigs and deploy them automatically. But now I stuck at the automation of the node annotation. As I could not find any useful content while searching re:Post or the internet, I assume the solution is rather simple.
I assume that the solution is somewhere here in the Launch Template, User Data or via `KUBELET_EXTRA_ARGS` but I'm just guessing.
**The Question**
How can I provide annotations like mine (below) to the nodes on launch or after they joined the cluster automatically?
```
kubectl annotate node ip-111-222-111-222.eu-central-1.compute.internal k8s.amazonaws.com/eniConfig=eu-central-1c
```
I was just wondering if someone could help with a comparison of these services. Except for the fact that they create diagrams, the documentation for both is quite vague. Clearly there is a pricing model difference (AWS resource costs vs a per-user cost), but it's the capabilities I'm more interested in.
Thanks!
```
{
"eventVersion": "1.08",
"userIdentity": {
"type": "IAMUser",
"principalId": "AIDAV4B5HOXQNKHTNXV6O",
"arn": "arn:aws:iam::403855341000:user/rahul.shah",
"accountId": "403855341000",
"accessKeyId": "ASIAV4B5HOXQF7USP2V4",
"userName": "rahul.shah",
"sessionContext": {
"sessionIssuer": {},
"webIdFederationData": {},
"attributes": {
"creationDate": "2022-08-23T04:50:55Z",
"mfaAuthenticated": "true"
}
}
},
"eventTime": "2022-08-23T08:34:31Z",
"eventSource": "s3.amazonaws.com",
"eventName": "CreateBucket",
"awsRegion": "us-east-1",
"sourceIPAddress": "103.108.207.58",
"userAgent": "[S3Console/0.4, aws-internal/3 aws-sdk-java/1.11.1030 Linux/5.4.204-124.362.amzn2int.x86_64 OpenJDK_64-Bit_Server_VM/25.302-b08 java/1.8.0_302 vendor/Oracle_Corporation cfg/retry-mode/standard]",
"requestParameters": {
"bucketName": "rahul-test-1",
"Host": "s3.amazonaws.com",
"x-amz-object-ownership": "BucketOwnerEnforced"
},
"responseElements": null,
"additionalEventData": {
"SignatureVersion": "SigV4",
"CipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
"bytesTransferredIn": 0,
"AuthenticationMethod": "AuthHeader",
"x-amz-id-2": "XaSiP6kwzBYfi8KGWMNM4DQy31Lce6qRBVc+gbD/rXg7W53uzT5Q1fmo6tL0f/yj9mFTk8eZQYQ=",
"bytesTransferredOut": 0
},
"requestID": "0WKZRVANGE15WRYG",
"eventID": "d89d952c-68b8-4c39-bdd1-67b6b92e0b4f",
"readOnly": false,
"eventType": "AwsApiCall",
"managementEvent": true,
"recipientAccountId": "403855341000",
"vpcEndpointId": "vpce-f40dc59d",
"eventCategory": "Management",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
"clientProvidedHostHeader": "s3.amazonaws.com"
}
}
```
There are events which does not give resource
* null is obtained in `responseElements`
* actual resource arn won't be available in `requestParameters`
Is there any way to get actual resource in above types of scenarios?
Our environment manages most of the infrastructure component changes via code. Additionally, certain administrators have access to make infrastructure changes to applicable resources directly via console (although direct changes are not encouraged or frequently made). Our auditors are indicating that as administrators have access to make direct infrastructure changes via console, they cannot rely on the population of infrastructure changes made via code (obtained from Git), and also want us to evidence that no direct infrastructure changes were made by administrators directly from the console. I wanted to understand the following:
1. How can we obtain population of direct infrastructure changes made from the console? We are thinking of obtaining the cloudtrail logs, but is there any other efficient way to obtain this population?
2. Also, does it make sense to restrict administrator access to prevent from making any direct infrastructure changes? What is the industry standard in terms of restricting administrator access?
3. Are there any other ways to evidence that no direct infrastructure changes were made outside of code changes?
We are creating the ECS cluster through cloud-formation in production. we want to increase the Minimum and desired count for the ECS service through AWS SDK ECS client api calls.
I want to confirm is there any issue or security concerns to increase the task count through AWS SDK ECS client API instead of Cloud-formation deployment?
Hello
In my team we provide CloudFormation templates and CDK Constructs for other teams in our organization to use. We want to track where and what version of our templates and constructs are used across multiple accounts. We are currently setting tags in our templates and constructs but compiling the information has turned out more difficult. We have looked at AWS Config but the Advanced Queries doesn't allow querying on tags nor does it seem to allow me to query on all resource types, for example I can't seem to query for any types of ECS resources which would be relevant for us.
Is there a good way to track this usage?
Thank you
As already pointed out in the old [Developer Forum](https://forums.aws.amazon.com/thread.jspa?messageID=812431&tstart=0#812431) CodeDeploy B/G deploy is failing if the AutoScalingGroup has some AutoScalingPolicy attached because it looses the attachment to the original TargetGroup.
Has anybody found a better solution than [manually re-attaching it](https://dev.to/cvortmann/fixing-aws-codedeploy-issue-where-auto-scaling-group-is-not-attached-to-target-group-47ac)?
Attaching the policies instead of the target is also a possibility, but it makes the management of the policies not in CloudFormation. BTW, the old workaround only worked with "classic LB", it can't be implemented with current ALBs.
It is "funny" that AWS is aware of this severe bug but did not fixed it yet, after 5 years ;(
Hello! I am looking for an equivalent to this solution that MIcrosoft has flaunted called IDP intelligent data platform, it is governance + operations + analytics in one. they flaunt synapse with aml and purview and other stuff they did with these to make it more integrated . I know we have RDS and Sagemaker -- but how about purview? how are we tgo make it more cohesive?
Hi, Does anyone use PyDeequ for large enterprises. I am exploring this library and have the below questions:
1) Looking at the github repo it doesnt seem like it is actively udated. ALso, it supoorts Spark 3.0.0 but not later versions.
2) Some of the apis didnt work(for complex examples). I dont know if there is any Amazon support.
3) Also the scala version(deequ) is more up to date than the python version(PuDeequ). s is there a plan to sunset the PyDeequ version
4) Should I use this for large enterprise data validation framework or there are any other alternate tools. Kindly advise.
Thank you!
I am using the AWS Config Service across multiple Accounts within my Organization. My goal is to write a query which will give me a full list of non-compliant resources in all regions, in all accounts. I have an Aggregator which has the visibility for this task. The Advanced Query I am using is similar to the AWS [Example in the docs:](https://docs.aws.amazon.com/config/latest/developerguide/example-query.html)
```
SELECT
configuration.targetResourceId,
configuration.targetResourceType,
configuration.complianceType,
configuration.configRuleList,
accountId,
awsRegion
WHERE
configuration.configRuleList.complianceType = 'NON_COMPLIANT'
```
However, the ConfigRuleName is nested within `configuration.configRuleList` - as there could be multiple config rules, (hence the list) assigned to `configuration.targetResourceId`
How can I write a query that picks apart the JSON list returned this way? Because the results returned do not export to csv for example very well at all. Exporting a JSON object within a csv provides an unsuitable method if we wanted to import this into a spreadsheet for example, for viewership.
I have tried to use `configuration.configRuleList.configRuleName` and this only returns `-` even when the list has a single object within. If there is a better way to create a centralised place to view all my Org's Non-Compliant Resources, I would like to learn about it.
Thanks in Advance.
Recently I worked with a project that needs some EC2 servers and RDS for the web applications. After creating all infrastructure resources, I have to create CloudWatch alarms for all EC2 and RDS instances, then I found I have to create hundreds of alarms in CloudWatch which is a nightmare for me!
In order to make life easier, I developed a solution for this situation, for details please refer to the following link:
https://github.com/jayhebe/cloudwatch_alarm_generator
Hope this is helpful and I know there are a lot of limitations and as a beginner of programming, the code is not perfect as well, but I am still working on that iteratively to make it better.
Let me know if you have any question :)
Br
Jay