
Questions tagged with Security



[Urgent Action Required] - Upgrade your RDS for PostgreSQL minor versions

This announcement is for customers running one or more Amazon RDS DB instances on a version of PostgreSQL that has been deprecated by Amazon RDS and requires attention. The RDS PostgreSQL minor versions listed in the table below are supported, and any DB instances running earlier versions will be automatically upgraded to the version marked as "preferred" by RDS, starting no earlier than July 15, 2022, 12 AM PDT:

| Major Versions Supported | Minor Versions Supported |
| --- | --- |
| 14 | 14.1 and later |
| 13 | 13.3 and later |
| 12 | 12.7 and later |
| 11 | 11.12 and later |
| 10 | 10.17 and later |
| 9 | none |

Amazon RDS supports DB instances running the PostgreSQL minor versions listed above. Minor versions not included above do not meet our high quality, performance, and security bar. In the PostgreSQL versioning policy [1], the PostgreSQL community recommends that you always run the latest available minor release for whatever major version is in use. Additionally, we recommend that you monitor the PostgreSQL security page for documented vulnerabilities [2].

If you have automatic minor version upgrade enabled as part of your configuration settings, you will be upgraded automatically. Alternatively, you can take action yourself by performing the upgrade earlier. You can initiate an upgrade by going to the Modify DB Instance page in the AWS Management Console and changing the database version setting to a newer minor/major version of PostgreSQL, or you can use the AWS CLI to perform the upgrade. To learn more about upgrading PostgreSQL minor versions in RDS, review the 'Upgrading Database Versions' page [3].

The upgrade process will shut down the database instance, perform the upgrade, and restart the database instance. The DB instance may restart multiple times during the process. If you choose the "Apply Immediately" option, the upgrade will be initiated immediately after clicking the "Modify DB Instance" button. If you choose not to apply the change immediately, the upgrade will be performed during your next maintenance window.

Starting no earlier than July 15, 2022, 12 AM PDT, we will automatically upgrade DB instances running a deprecated minor version to the preferred minor version of the specific major version of your RDS PostgreSQL database. (For example, instances running RDS PostgreSQL 10.1 will be automatically upgraded to 10.17 starting no earlier than July 15, 2022, 12 AM PDT.) Should you need to create new instances using the deprecated version(s) of the database, we recommend that you restore from a recent DB snapshot [4]. You can continue to run and modify existing instances/clusters using these versions until July 14, 2022, 11:59 PM PDT, after which your DB instance will automatically be upgraded to the preferred minor version of the specific major version of your RDS PostgreSQL database. Starting no earlier than July 15, 2022, 12 AM PDT, restoring the snapshot of a deprecated RDS PostgreSQL database instance will result in an automatic version upgrade of the restored database instance, using the same upgrade process as described above.

Should you have any questions or concerns, please see the RDS FAQs [5], or contact the AWS Support Team on the community forums and via AWS Support [6].
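For reference, initiating the upgrade from the AWS CLI looks roughly like the sketch below; the instance identifier and target engine version are placeholders, not values from this announcement.

```
# Upgrade a single DB instance to a newer PostgreSQL minor version right away.
# my-postgres-instance and 13.3 are placeholders; pick a version from the table above.
aws rds modify-db-instance \
    --db-instance-identifier my-postgres-instance \
    --engine-version 13.3 \
    --apply-immediately
```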
Sincerely, Amazon RDS

[1] https://www.postgresql.org/support/versioning/
[2] https://www.postgresql.org/support/security/
[3] http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html
[4] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html
[5] https://aws.amazon.com/rds/faqs/ [search for "guidelines for deprecating database engine versions"]
[6] https://aws.amazon.com/support
0 answers · 1 vote · 4 views · AWS-User-8019255 · asked 9 days ago

Adding MFA to Workspaces "failed" problem

I have been attempting to add Multi-Factor Authentication to my WorkSpaces account for my user base. I have configured the RADIUS server using FreeRADIUS from this post: https://aws.amazon.com/blogs/desktop-and-application-streaming/integrating-freeradius-mfa-with-amazon-workspaces/ and all goes according to plan. I have the FreeRADIUS server using LinOTP running. The problem is in the very last step: when I go to enable MFA in WorkSpaces, I put in the information and it just says "failed".

Specifically, Step 6: Enable MFA on your AWS Directory. Communication between the AWS Managed Microsoft AD RADIUS client and your RADIUS server requires you to configure AWS security groups that enable communication over port 1812. Edit your Virtual Private Cloud (VPC) security groups to enable communications over port 1812 between your AWS Directory Service IP endpoints and your RADIUS MFA server.

1. Navigate to your Directory Service console.
2. Click the directory you want to enable MFA on.
3. Select the Network & Security tab, scroll down to Multi-factor authentication, then click Actions and Enable.
4. In Enable multi-factor authentication (MFA), configure the MFA settings:
   - Display label: Example
   - RADIUS server IP address(es): private IP of the Amazon Linux 2 instance
   - Port: 1812
   - Shared secret code: the one set in /etc/raddb/clients.conf
   - Confirm shared secret code: as preceding
   - Protocol: PAP
   - Server timeout (in seconds): 30
   - Max retries: 3

This operation can take between 5-10 minutes to complete. Once the RADIUS status is "completed" you can test MFA authentication from the WorkSpaces client.

I really have two questions:

1. How do I do this part: "Edit your Virtual Private Cloud (VPC) security groups to enable communications over port 1812 between your AWS Directory Service IP endpoints and your RADIUS MFA server." Maybe I'm not setting up the endpoints correctly? Do I go to the VPC and add endpoints there? Can you please be specific.
2. How do I get more information beyond just the "failed" in red? How do I access the creation logs?

Thanks in advance, Jon
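For question 1, a rough sketch of what that security group change can look like with the AWS CLI is below. The group IDs are placeholders, RADIUS traffic is normally UDP on port 1812, and you could equally use the directory controllers' IP addresses as a CIDR source instead of a source group.

```
# Allow UDP 1812 into the FreeRADIUS server's security group from the directory's security group.
# sg-0radius1111111111 and sg-0directory2222222 are placeholder IDs for your own groups.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0radius1111111111 \
    --protocol udp \
    --port 1812 \
    --source-group sg-0directory2222222
```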
1 answer · 0 votes · 3 views · AWS-User-6508273 · asked 21 days ago

Security group appears to block certain ports after google-authenticator mis-entries

I run a small server providing web and mail services with a public address. I was planning on upgrading from a t2.small to a t3.small instance, so I began testing the new environment using Ubuntu 20.04. The new instance is running nginx, postfix, and dovecot, and has ports 22, 25, 80, 443, 587 and 993 open through two assigned security groups.

I wanted to test a user which used only google-authenticator with pam/sshd to log in (no pubkey, no password). What I discovered was that after two sets of failed login attempts (intentional), my connection to the server would be blocked and I would receive a timed-out message. Checking the port status with nmap shows that ports 22, 80 and 443 were closed, and the remaining ports still open. I can still reach all the ports normally from within my VPC, but from outside, the ports are blocked. Restarting the instance or reassigning the security groups will fix the problem. Also, after about 5 minutes, the problem resolves itself.

It appears that the AWS security group is the source of the block, but I can find no discussion of this type of occurrence. This isn't critical, but a bit troubling, because it opens a route for malicious actions that could block access to my instance. I have never experienced anything like this in about 7 years of running a similar server, though I never used google-authenticator with pam/sshd before. Do you have any ideas? I'd be happy to provide the instance ID and security groups if needed.
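For reference, a minimal sketch of the port check described above; the hostname is a placeholder, and the idea is to run it once from outside the VPC and once from a host inside it to compare results.

```
# example.com is a placeholder for the instance's public DNS name or Elastic IP.
# Run from outside the VPC, then from another instance inside the VPC, and diff the output.
nmap -Pn -p 22,25,80,443,587,993 example.com
```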
1 answer · 0 votes · 5 views · AWS-User-2666223 · asked a month ago

Unauthorized AWS account racked up charges on stolen credit card.

My mother was automatically signed up for an AWS account, or someone used her credentials to sign up. She did not know that she had been signed up, and it sat unused for 3 years. Last month, she got an email from AWS for "unusual activity" and she asked me to help her look into it. Someone racked up $800+ in charges in 10 days for AWS services she has never heard of, let alone used (SageMaker and Lightsail were among the services). The card on the AWS account is a credit card that was stolen years ago and has since been cancelled, so when AWS tried to charge the card, it didn't go through.

My experience with AWS customer service has been unhelpful so far. Mom changed her AWS password in time so we could get into the account and contact support. I deleted the instances so that the services incurring charges are now stopped. But now AWS is telling me to put in a "valid payment method" or else they will not review the fraudulent bill. They also said that I have to set up additional AWS services (Cost Management, Amazon CloudWatch, CloudTrail, WAF, security services) before they'll review the bill. I have clearly explained to them that this entire account is unauthorized and we want to close it ASAP, so adding further services and a payment method doesn't make sense.

Why am I being told to use more AWS services when my goal is to use zero? Why do I have to set up "preventative services" when the issue I'm trying to resolve is a PAST issue of fraud? They also asked me to write back and confirm that we have "read and understood the AWS Customer Agreement and shared responsibility model." Of course we haven't, because we didn't even know the account existed!

Any advice or input into this situation? It's extremely frustrating to be told that AWS won't even look into the issue unless I set up these additional AWS services and give them a payment method. This is a clear case of identity fraud. We want this account shut down. Support Case # is xxxxxxxxxx.

Edit: removed case ID -Ann D
1 answer · 0 votes · 13 views · AWS-User-5400487 · asked a month ago

Redshift Clear Text Passwords and Secret keys exposed?

Hi there, I received the following email about my Redshift cluster:

> We are reaching out to inform you your Amazon Redshift cluster(s) may have been affected by an issue caused by a change introduced on October 13, 2021, where your password and/or your Secret_Access_Key may have been inadvertently written in plain text to your cluster's audit logs (stl_user_activity_log). We do not have any indication that these credentials have been accessed. We applied a patch on January 19, 2022, to fix the issue for all clusters in all AWS regions.

> As a cautionary measure, we recommend that you: (1) Review any access to your cluster(s) in your audit log files from October 13, 2021 through January 19, 2022, such as those by authorized applications, to ensure your access credentials and passwords were not accessed; (2) Immediately change your cluster's password and/or generate a new Secret_Access_Key for use with COPY and UNLOAD commands for moving files between Amazon S3 and Amazon Redshift; and (3) Scan and sanitize your audit log files that were created between October 13, 2021 through January 19, 2022, both dates inclusive, to remove any occurrences of clear text passwords and security keys in them.

However, looking at my cluster I can't see a stl_user_activity_log:

> Select * from stl_user_activity_log;
> SQL Error [42P01]: ERROR: relation "pg_catalog.stl_user_activity_log" does not exist

Was this email pointing out the wrong audit logs, or should I not be looking for these audit logs in a table? We have S3 audit logging enabled, but browsing through those I don't see anything either.
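For reference, a rough sketch of step (3) from the email, for the S3 audit logging case. The bucket name and search patterns are only illustrative placeholders, and this assumes the delivered log files are gzip-compressed.

```
# Pull down the audit log files for the affected window and search them locally.
# s3://my-redshift-audit-logs/ is a placeholder for your own audit logging bucket/prefix.
aws s3 sync s3://my-redshift-audit-logs/ ./redshift-logs/
# Search the compressed files for candidate strings; adjust patterns to your own credentials.
find ./redshift-logs -name '*.gz' \
  -exec zgrep -l -i -e "password" -e "aws_secret_access_key" {} + > suspect-files.txt
```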
1 answer · 0 votes · 8 views · AWS-User-5958751 · asked 2 months ago

EC2 instance can’t access the internet

Apparently, my EC2 instance can't access the internet properly. Here is what happens when I try to install a Python module:

```
[ec2-user@ip-172-31-90-31 ~]$ pip3 install flask
Defaulting to user installation because normal site-packages is not writeable
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fab198cbe10>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/flask/
```

etc. Besides, inbound ping requests to the instance's Elastic IP fail (Request Timed Out). However, the website that is hosted on the same EC2 instance can be accessed using both HTTP and HTTPS.

The security group is configured as follows. The inbound rules are:

| Port range | Protocol | Source |
| --- | --- | --- |
| 80 | TCP | 0.0.0.0/0 |
| 22 | TCP | 0.0.0.0/0 |
| 80 | TCP | ::/0 |
| 22 | TCP | ::/0 |
| 443 | TCP | 0.0.0.0/0 |
| 443 | TCP | ::/0 |

The outbound rules are:

| IP Version | Type | Protocol | Port range | Destination |
| --- | --- | --- | --- | --- |
| IPv4 | All traffic | All | All | 0.0.0.0/0 |

The ACL inbound rules are:

| Type | Protocol | Port range | Source | Allow/Deny |
| --- | --- | --- | --- | --- |
| HTTP (80) | TCP (6) | 80 | 0.0.0.0/0 | Allow |
| SSH (22) | TCP (6) | 22 | 0.0.0.0/0 | Allow |
| HTTPS (443) | TCP (6) | 443 | 0.0.0.0/0 | Allow |
| All ICMP - IPv4 | ICMP (1) | All | 0.0.0.0/0 | Allow |
| All traffic | All | All | 0.0.0.0/0 | Deny |

and the outbound rules are:

| Type | Protocol | Port range | Destination | Allow/Deny |
| --- | --- | --- | --- | --- |
| Custom TCP | TCP (6) | 1024 - 65535 | 0.0.0.0/0 | Allow |
| HTTP (80) | TCP (6) | 80 | 0.0.0.0/0 | Allow |
| SSH (22) | TCP (6) | 22 | 0.0.0.0/0 | Allow |
| HTTPS (443) | TCP (6) | 443 | 0.0.0.0/0 | Allow |
| All ICMP - IPv4 | ICMP (1) | All | 0.0.0.0/0 | Allow |
| All traffic | All | All | 0.0.0.0/0 | Deny |

This is what the route table associated with the subnet looks like (no explicit or edge associations):

| Destination | Target | Status | Propagated |
| --- | --- | --- | --- |
| 172.31.0.0/16 | local | Active | No |
| 0.0.0.0/0 | igw-09b554e4da387238c | Active | No |

As for the firewall, executing `sudo iptables -L` results in

```
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
```

and `sudo iptables -L -t nat` gives

```
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
```

What am I missing here? Any suggestions or ideas on this would be greatly appreciated. Thanks
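A couple of quick checks run on the instance can help narrow down whether only HTTPS is affected or all outbound TCP; this is only a sketch, and the hosts below are arbitrary examples (nc may need to be installed first).

```
# Does outbound TCP 443 work at all (DNS resolution, routing, return traffic)?
curl -v --max-time 10 https://pypi.org/simple/ 2>&1 | head
# Compare against a different outbound port to a known-reachable host.
nc -vz -w 5 github.com 22
```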
2 answers · 0 votes · 14 views · AWS-User-9646998 · asked 2 months ago
1 answer · 0 votes · 6 views · AWS-User-9646998 · asked 2 months ago

What is Best Practice configuration for a SECURE single user WorkSpaces VPC?

I am a one-person business looking to set up a simple VPC for access to a virtual Windows desktop when I travel from the US to Europe. My trips are 1-3 months in duration, and I'd like to carry just my iPad or a Chromebook rather than a full laptop. This is easier and more secure if my desktop is in the AWS cloud. I am a bit of a network novice and my prior experience with AWS has been only with S3 buckets.

From reading the AWS docs, I have learned how to create a VPC, with subnets and a Simple AD. I can spin up a WorkSpace and access it. However, I am unsure about what additional steps, if any, I should take to *secure* my WorkSpaces environment.

I am using public subnets without a NAT Gateway, because I only need one WorkSpace image and would like to avoid paying $35+ per month for the NAT just to address one image. I know that one of the side benefits of using a NAT Gateway is that I get a degree of isolation from the Internet, because any images behind a NAT Gateway would not be directly reachable from the Internet. However, in my case, my WorkSpace image has an assigned IP and is *not* behind a NAT Gateway.

My questions are:

1. Am I taking unreasonable risks by placing my WorkSpaces in a public subnet, i.e., by not using a NAT Gateway?
2. Should I restrict access using Security Group rules, and if so, how?
3. Are there other steps I should take to improve the security of my VPC?

I want to access my WorkSpace using an iPad, so I can't use certificate-based authentication. I don't know if I could easily use IP restriction, because I don't know in advance the IP range I would be in when I travel. PLUS, as you can probably tell, I'm confused about what I need to secure: the WorkSpace image, my Simple AD instance, or both?

I'm having a hard time finding guidance in the AWS documentation, because much of the docs are oriented toward corporate use cases, which is understandable. The "getting started" documentation is excellent but doesn't seem to touch on my questions. Thanks in advance for any answers or documentation sources you provide!
3 answers · 0 votes · 10 views · AWS-User-8794650 · asked 3 months ago

Permissions for IoT Things and Cognito User/Identity Pools

Hello, I am having some issues architecting a good security scheme for managing IoT Thing access for Cognito users. My use case is the following:

* We have a number of users (corresponding to users in a User Pool, with an associated Identity Pool). Each user belongs to a particular "Company". Currently this is done via an attribute (`custom:Company`).
* We have a number of IoT Things. Each of these Things belongs to a static Thing Group whose name matches the attribute above.

I'd like for a given User/Identity to be able to receive the MQTT data stream from Things that belong to a static group that matches their `custom:Company` attribute. Example:

* I have 6 Things: A, B, C, D, E, F.
* A, B & C belong to static group "FirstCompany".
* D, E & F belong to static group "SecondCompany".
* I have two Cognito users/identities: Alice and Bob.
* Alice has the custom attribute `custom:Company` = FirstCompany.
* Bob has the custom attribute `custom:Company` = SecondCompany.

I'd like for Alice to be able to subscribe to the MQTT topics for devices A, B and C, but NOT D, E and F. This means permissions for iot:Connect, iot:Receive, iot:Publish and iot:Subscribe. The pseudo-policy I'd like to assign to all users is something like this:

```
effect = allow,
action = ["iot:Receive", ...]
condition: target thing group == ${aws:PrincipalTag/custom:Company}
```

Unfortunately I haven't found something as straightforward as this. As I see it, my options are:

1. Draft custom policies for each customer, in which each Thing (and associated topics) is explicitly allowed. This seemingly wouldn't scale well if a customer has thousands of Things.
2. Create a custom IoT authorizer that compares the principal's attribute with the Thing's static group. This seems like it'd run into rate limiting issues, especially if I have to check which groups a Thing belongs to for every MQTT message.
3. Come up with a naming scheme for devices that includes a customer name in some way (e.g., instead of A, B, C I'd have FirstCustomer-A, FirstCustomer-B, FirstCustomer-C). This doesn't feel like a great approach.

However, it seems like this situation would be pretty common! Is there a particular way this should be done? Any guidance would be appreciated!

--------------

Edit: Following up on the suggestion from Pronoy_C, I've set up the following IoT Core Policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Connect",
      "Resource": "*"
    },
    {
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:sub": "${iot:Connection.Thing.Attributes[Owner]}"
        }
      },
      "Effect": "Allow",
      "Action": "iot:*",
      "Resource": "arn:aws:iot:us-east-1:xxxxxxxxxxxx:thing/*"
    }
  ]
}
```

and I've attached this policy both to Thing A and to Alice's Identity. However, while Alice is able to connect to the MQTT host, I cannot publish to `$aws/things/A/shadow/get`. The AWS logs indicate AUTHORIZATION_FAILURE. I do indeed have the Owner attribute set to Alice's Identity ID. I tried testing with the CLI tool, but have run into issues there (see [this thread](https://repost.aws/questions/QUIy1VujDmTvO-3Il99dw-xQ/aws-io-t-test-authorization-missing-context-values)).
3 answers · 1 vote · 8 views · AWS-User-2848082 · asked 3 months ago

Python lambda failing to initialize RSA public key occasionally

I'm trying to create a custom request authorizer working with several user pools, in Python. To validate tokens I tried first with pyjwk/cryptography:

```
claims = jwt.decode(token, options={"verify_signature": False, "require": ["iss"]})
issuer = claims['iss']
jwks_client = PyJWKClient(issuer + "/.well-known/jwks.json", False)
signing_key = jwks_client.get_signing_key_from_jwt(token)
```

Occasionally, about 5% of the time, a Lambda instance will just time out on this last line, even with a 30-second run time. I thought maybe it was the network, so I rewrote it to fetch the JWK through requests and initialize the key with RSAAlgorithm.from_jwk. Nope: the JWK is retrieved, but it's initializing the key that fails. I called RSAAlgorithm.from_jwk outside the handler method with a dummy hardcoded JWK to move initialization of cryptography to the init stage; the handler works more smoothly now, instead of being slow on the first invocation, but the random failure still happens. I thought maybe it was cryptography or pyjwk, so I switched to python-jose and its different backends. Nope, it still fails when loading the key, now written as jwk.construct().

What is causing this strange and random behavior? An instance that failed once stays permanently broken and doesn't recover on the next request. There is nothing in the logs, although such broken instances drop their memory usage. Here are the first two requests from broken and working instances running the same image, at the same time, for the same user pool key.

Broken:

```
2022-02-14T17:38:17.185+02:00 START RequestId: d9d92287-ccae-4aa4-8f94-6e8ed8c276a4 Version: $LATEST
2022-02-14T17:38:17.205+02:00 [DEBUG] 2022-02-14T15:38:17.205Z d9d92287-ccae-4aa4-8f94-6e8ed8c276a4 Starting new HTTPS connection (1): cognito-idp.us-east-1.amazonaws.com:443
2022-02-14T17:38:20.190+02:00 END RequestId: d9d92287-ccae-4aa4-8f94-6e8ed8c276a4
2022-02-14T17:38:20.190+02:00 REPORT RequestId: d9d92287-ccae-4aa4-8f94-6e8ed8c276a4 Duration: 3003.51 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 53 MB Init Duration: 467.93 ms
2022-02-14T17:38:20.190+02:00 2022-02-14T15:38:20.189Z d9d92287-ccae-4aa4-8f94-6e8ed8c276a4 Task timed out after 3.00 seconds
2022-02-14T17:38:20.706+02:00 START RequestId: a5242265-c13d-4015-9b7d-2699f0b26efe Version: $LATEST
2022-02-14T17:38:20.709+02:00 [DEBUG] 2022-02-14T15:38:20.709Z a5242265-c13d-4015-9b7d-2699f0b26efe Starting new HTTPS connection (1): cognito-idp.us-east-1.amazonaws.com:443
2022-02-14T17:38:23.712+02:00 END RequestId: a5242265-c13d-4015-9b7d-2699f0b26efe
2022-02-14T17:38:23.712+02:00 REPORT RequestId: a5242265-c13d-4015-9b7d-2699f0b26efe Duration: 3004.51 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 23 MB
2022-02-14T17:38:23.712+02:00 2022-02-14T15:38:23.711Z a5242265-c13d-4015-9b7d-2699f0b26efe Task timed out after 3.00 seconds
```

Working:

```
2022-02-14T17:38:23.733+02:00 START RequestId: 2ea5db18-c9b5-4df8-b3ef-dfc01f9ede00 Version: $LATEST
2022-02-14T17:38:23.740+02:00 [DEBUG] 2022-02-14T15:38:23.739Z 2ea5db18-c9b5-4df8-b3ef-dfc01f9ede00 Starting new HTTPS connection (1): cognito-idp.us-east-1.amazonaws.com:443
2022-02-14T17:38:23.926+02:00 [DEBUG] 2022-02-14T15:38:23.926Z 2ea5db18-c9b5-4df8-b3ef-dfc01f9ede00 https://cognito-idp.us-east-1.amazonaws.com:443 "GET /us-east-1_.../.well-known/jwks.json HTTP/1.1" 200 916
2022-02-14T17:38:23.942+02:00 [DEBUG] 2022-02-14T15:38:23.941Z 2ea5db18-c9b5-4df8-b3ef-dfc01f9ede00 Got the key a2PUhJTqMTiNysvmY+RfUPARHESV35jOMXWXJ4mAa/A= in 0.20495343208312988 seconds
2022-02-14T17:38:23.960+02:00 [INFO] 2022-02-14T15:38:23.960Z 2ea5db18-c9b5-4df8-b3ef-dfc01f9ede00 response {'principalId': '...', 'policyDocument': {...}}
2022-02-14T17:38:23.980+02:00 END RequestId: 2ea5db18-c9b5-4df8-b3ef-dfc01f9ede00
2022-02-14T17:38:23.980+02:00 REPORT RequestId: 2ea5db18-c9b5-4df8-b3ef-dfc01f9ede00 Duration: 244.45 ms Billed Duration: 245 ms Memory Size: 128 MB Max Memory Used: 55 MB Init Duration: 447.66 ms
2022-02-14T17:38:24.149+02:00 START RequestId: 1cca0b7a-0fa4-477d-9ddd-95d97db113b2 Version: $LATEST
2022-02-14T17:38:24.154+02:00 [DEBUG] 2022-02-14T15:38:24.154Z 1cca0b7a-0fa4-477d-9ddd-95d97db113b2 Got the cached key a2PUhJTqMTiNysvmY+RfUPARHESV35jOMXWXJ4mAa/A=
2022-02-14T17:38:24.155+02:00 [INFO] 2022-02-14T15:38:24.155Z 1cca0b7a-0fa4-477d-9ddd-95d97db113b2 response {'principalId': '...', 'policyDocument': {...}}
2022-02-14T17:38:24.156+02:00 END RequestId: 1cca0b7a-0fa4-477d-9ddd-95d97db113b2
2022-02-14T17:38:24.156+02:00 REPORT RequestId: 1cca0b7a-0fa4-477d-9ddd-95d97db113b2 Duration: 2.64 ms Billed Duration: 3 ms Memory Size: 128 MB Max Memory Used: 55 MB
```
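For context, the requests + RSAAlgorithm.from_jwk variant mentioned above looks roughly like the sketch below; the function and variable names are illustrative and error handling is omitted.

```
import json

import requests
from jwt.algorithms import RSAAlgorithm


def get_signing_key(issuer: str, kid: str):
    """Fetch the user pool's JWKS over HTTPS and build the RSA public key for `kid`."""
    jwks = requests.get(f"{issuer}/.well-known/jwks.json", timeout=5).json()
    jwk = next(k for k in jwks["keys"] if k["kid"] == kid)
    # PyJWT builds a cryptography public key object from the JWK passed as a JSON string.
    return RSAAlgorithm.from_jwk(json.dumps(jwk))
```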
1 answer · 0 votes · 3 views · Alexei Nenno · asked 3 months ago

IAM permissions required for rds:RestoreDBClusterToPointInTime

Hi there, I am trying to figure out the required permissions for a role to call rds:RestoreDBClusterToPointInTime. https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterToPointInTime.html gives me some clue, but I am not sure what I came up with is safe.

I am trying to clone an Aurora MySQL 2 cluster. Via the RDS API, I use rds:RestoreDBClusterToPointInTime and then rds:CreateDBInstance. By trial and error, I got it working with the policy excerpt below:

```
{
  Effect = "Allow"
  Action = [
    "rds:AddTagsToResource",
    "rds:CreateDBInstance",
    "rds:DeleteDBInstance",
    "rds:DeleteDBCluster",
    "rds:DescribeDBClusters",
    "rds:DescribeDBInstances",
    "rds:RestoreDBClusterToPointInTime"
  ]
  Resource = [
    "arn:aws:rds:${data.aws_region.this.name}:${data.aws_caller_identity.this.account_id}:cluster:${var.destination_cluster_identifier}",
    "arn:aws:rds:${data.aws_region.this.name}:${data.aws_caller_identity.this.account_id}:cluster:${var.source_cluster_identifier}",
    "arn:aws:rds:${data.aws_region.this.name}:${data.aws_caller_identity.this.account_id}:cluster-pg:${aws_rds_cluster_parameter_group.this.name}",
    "arn:aws:rds:${data.aws_region.this.name}:${data.aws_caller_identity.this.account_id}:subgrp:${aws_db_subnet_group.this.name}",
    "arn:aws:rds:${data.aws_region.this.name}:${data.aws_caller_identity.this.account_id}:secgrp:${aws_security_group.rds.name}",
    "arn:aws:rds:${data.aws_region.this.name}:${data.aws_caller_identity.this.account_id}:db:${local.rds_instance_name}"
  ]
}
```

Where I am uncertain is how we can make rds:RestoreDBClusterToPointInTime one-way, that is, how to limit which cluster is the source and which is the destination. It looks like both source and destination clusters must be in the Resource block, so we can't distinguish the source cluster from the destination cluster. Is there a way to do so?
1 answer · 0 votes · 6 views · ohmer · asked 3 months ago

Outbound Ports 80 and 443 being blocked from instance

So, this has been keeping me busy for the past couple of days. It started when I was troubleshooting the PayPal integration, which is used only a couple of times a year when registration opens for an event. It worked fine in October, but suddenly it stopped working. I quickly figured out that the reason was that I couldn't connect to PayPal via port 443. Upon further testing, I discovered I couldn't connect to *anything* on port 80 or 443. Outbound SSH, FTP, and SMTP work fine from this instance.

I checked the ACLs for the VPC, which are allow any/any. I checked my security group, which is also set to outbound any/any. As a note, *inbound* HTTP and HTTPS both work just fine; the website is still up. It's just that when I try to connect to anything else, even as root, it fails. I have checked the configuration of the server: there's nothing in iptables, and the Ubuntu firewall is disabled. The server can connect to its own internal IP on port 80, but not its external IP. I have another instance running, and on that instance I can connect to its internal IP on port 80, but not its external IP. Reassociating the server with a different Elastic IP gives the same behavior. The other server can reach the Internet just fine on ports 80/443.

Things I have tried:

1. tcptraceroute fails immediately on the first hop.
2. All other ports that I have tried work fine. Just 80 and 443 seem to be affected.
3. The behavior started sometime in the last 3 months.
4. tcpdump sees the SYN packets going outbound and supposedly leaving the interface.

So far, the only things I can think of that are consistent with the behavior:

1. The server has been compromised, or something got installed that is trying to capture/redirect all 80/443 traffic, but I can't think of anything, or where it would be. It would have to be intercepted at the kernel level for tcpdump to see the SYN packets and think they are going out of eth0. I'm not sure how to prove a negative here. I may try creating a new instance using this server's volume and see what happens there.
2. Something associated with this particular instance is blocking outbound traffic, possibly upstream of us.

Does anyone know of any settings I haven't mentioned that would relate to this? Any ideas are appreciated!
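For reference, a rough sketch of the kind of capture described in point 4 above, to see whether any SYN-ACKs ever come back on port 443; the interface name and test URL are placeholders.

```
# Watch outbound SYNs and any replies for TCP 443 while reproducing the failure.
sudo tcpdump -ni eth0 'tcp port 443'
# In another terminal on the same instance:
curl -v --max-time 10 https://www.paypal.com/ >/dev/null
```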
2 answers · 0 votes · 13 views · Jharvre · asked 3 months ago

Cognito Identity Pools Attribute-based access control - dynamic attributes

I have hundreds of S3 buckets and dozens of users in a Cognito User Pool. I want to be able to select which user can access which S3 bucket, for example:

* `user_a` can access `bucket_1`, `bucket_2`, `bucket_3`
* `user_b` can access `bucket_2`
* `user_c` can access `bucket_1`, `bucket_4`

and so on. I would love to be able to do it without creating a dedicated API that generates dynamic policies. I thought about utilising Cognito Identity Pools and attribute-based access control. There [is a cool example](https://docs.aws.amazon.com/cognito/latest/developerguide/using-attributes-for-access-control-policy-example.html) where a user gets an attribute `"department": "legal"` and is then assigned a role that is allowed to query only the buckets with the `-legal` suffix, thanks to `${aws:PrincipalTag/department}` magic. If my users were to access only one bucket, that would be a solution. **However, in my case a user could get assigned to dozens or hundreds of buckets** (think "multiple departments" in the example from the AWS docs).

I thought of using multiple custom attributes on each user:

* `bucket_1: true`
* `bucket_2: false`
* `bucket_3: false`
* ...and so on

and creating a policy that allows access to a given `bucket_n` if and only if the user has the attribute `bucket_n: true`. This would work if I had at most 50 buckets (the hard limit on custom attributes in Cognito). In my case, this value is higher (a couple of hundred). I can have users with access to 200+ buckets as well as ones allowed only one bucket.

Is there any way to achieve my goal with Cognito Identity Pools and IAM Policies?
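For context, the single-tag pattern from the linked docs example boils down to something like the following sketch; the `-department` bucket-suffix convention and the actions shown are assumptions for illustration, not the exact policy from the docs.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::*-${aws:PrincipalTag/department}",
        "arn:aws:s3:::*-${aws:PrincipalTag/department}/*"
      ]
    }
  ]
}
```

The limitation raised in the question is exactly that this pattern resolves to a single tag value per principal, so it does not by itself express "this user may access these 200 arbitrary buckets".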
0 answers · 0 votes · 4 views · blahblahblah2 · asked 3 months ago

Access Control in Secrets Manager for Federated Users

My scenario: I have my users in Azure AD. This is connected via single-account SSO into an AWS account using an IAM SAML IdP (PS: we are not using the AWS SSO service). We are using AWS Secrets Manager and want to store per-user secrets using a secret name path (e.g. /usersecrets/<azure_ad_username>/<secret_name>). When the users log in using Azure AD auth, they automatically assume the attached IAM role. I would like to do the following:

Requirement 1:
1. Allow users to list secrets, create secrets and get the secret value for any secret whose name matches /usersecrets/<azure_ad_username>/* (here azure_ad_username is what the AWS session sees when they assume the role to log in).
2. Deny access to any secret unless the request is coming from a federated user (i.e. local IAM users in the AWS account should not be able to see any secret in the path /usersecrets/<azure_ad_username>/*).

Requirement 2:
In addition to the federated Azure AD users, I also want to allow an EC2 instance role to Get/List/Describe any secret. This EC2 role is in the same AWS account as the secrets and is attached to all Windows servers. This IAM role is to allow SSM Run Command to execute on these Windows machines and fetch the secret values (e.g., to get a user's secret and create a local Windows user with the same name and password as in Secrets Manager, using PowerShell).

Questions: Can you help with some sample IAM policy for the role, or the Secrets Manager resource policy, that I can use to meet both requirements?
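As a rough sketch of the kind of identity policy Requirement 1 seems to call for, assuming the Azure AD username is passed to AWS as a SAML session tag named `username` (that tag name, the account ID, and the ARNs are assumptions, not details from the question):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PerUserSecrets",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:CreateSecret",
        "secretsmanager:DescribeSecret",
        "secretsmanager:GetSecretValue"
      ],
      "Resource": "arn:aws:secretsmanager:*:111122223333:secret:/usersecrets/${aws:PrincipalTag/username}/*"
    },
    {
      "Sid": "ListAllSecrets",
      "Effect": "Allow",
      "Action": "secretsmanager:ListSecrets",
      "Resource": "*"
    }
  ]
}
```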
1 answer · 0 votes · 4 views · Alexa · asked 4 months ago

Cognito - CustomSMSSender InvalidCiphertextException: null on Code Decrypt (Golang)

Hi, I followed this document to customize the Cognito SMS delivery flow: https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-custom-sms-sender.html

I'm not working in a JavaScript environment, so I wrote this Go snippet:

```
package main

import (
	"context"
	golog "log"
	"os"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
)

// Using these types because aws-sdk-go does not provide them.
// CognitoEventUserPoolsCustomSmsSender is sent by AWS Cognito User Pools before each message to send.
type CognitoEventUserPoolsCustomSmsSender struct {
	events.CognitoEventUserPoolsHeader
	Request CognitoEventUserPoolsCustomSmsSenderRequest `json:"request"`
}

// CognitoEventUserPoolsCustomSmsSenderRequest contains the request portion of a CustomSmsSender event.
type CognitoEventUserPoolsCustomSmsSenderRequest struct {
	UserAttributes map[string]interface{} `json:"userAttributes"`
	Code           string                 `json:"code"`
	ClientMetadata map[string]string      `json:"clientMetadata"`
	Type           string                 `json:"type"`
}

func main() {
	lambda.Start(sendCustomSms)
}

func sendCustomSms(ctx context.Context, event *CognitoEventUserPoolsCustomSmsSender) error {
	golog.Printf("received event=%+v", event)
	golog.Printf("received ctx=%+v", ctx)

	config := aws.NewConfig().WithRegion(os.Getenv("AWS_REGION"))
	session, err := session.NewSession(config)
	if err != nil {
		return err
	}

	kmsProvider := kms.New(session)
	smsCode, err := kmsProvider.Decrypt(&kms.DecryptInput{
		KeyId:          aws.String("a8a566c5-796a-4ba1-8715-c9c17c6f0cb5"),
		CiphertextBlob: []byte(event.Request.Code),
	})
	if err != nil {
		return err
	}
	golog.Printf("decrypted code %v", smsCode.Plaintext)
	return nil
}
```

I'm always getting `InvalidCiphertextException: : InvalidCiphertextException null`. Can someone help? This is how the Lambda config looks on my user pool:

```
"LambdaConfig": {
    "CustomSMSSender": {
        "LambdaVersion": "V1_0",
        "LambdaArn": "arn:aws:lambda:eu-west-1:...:function:cognito-custom-auth-sms-sender-dev"
    },
    "KMSKeyID": "arn:aws:kms:eu-west-1:...:key/a8a566c5-796a-4ba1-8715-c9c17c6f0cb5"
},
```
1 answer · 0 votes · 1 view · AWS-User-1153293 · asked 4 months ago