
Security, Identity, & Compliance

Securely run your business with the most flexible and secure cloud computing environment available. Benefit from AWS data centers and a network architected to protect your information, applications, and devices. Meet core security requirements, such as data locality, protection, and confidentiality, with our comprehensive services and features.

Recent questions


What to look at for resolving NICE DCV 404 errors

I've got an EC2 instance set up with NICE DCV. I have set up port access in my security group rules and created a session in NICE DCV. However, whenever I try to connect to the session via the browser, I get an HTTP ERROR 404. I can't seem to find any information in the NICE DCV docs about causes of 404, except for the session resolver, which I'm not using. How can I go about resolving this issue?

Below is the output from `dcv list-sessions -j`:

```
[
  {
    "id" : "cloud9-session",
    "owner" : "ubuntu",
    "num-of-connections" : 0,
    "creation-time" : "2022-09-23T12:58:40.919860Z",
    "last-disconnection-time" : "",
    "licenses" : [
      {
        "product" : "dcv",
        "status" : "licensed",
        "check-timestamp" : "2022-09-23T12:58:42.540422Z",
        "expiration-date" : ""
      },
      {
        "product" : "dcv-gl",
        "status" : "licensed",
        "check-timestamp" : "2022-09-23T12:58:42.540422Z",
        "expiration-date" : ""
      }
    ],
    "licensing-mode" : "EC2",
    "storage-root" : "",
    "type" : "virtual",
    "status" : "running",
    "x11-display" : ":0",
    "x11-authority" : "/run/user/1000/dcv/cloud9-session.xauth",
    "display-layout" : [
      {
        "width" : 800,
        "height" : 600,
        "x" : 0,
        "y" : 0
      }
    ]
  }
]
```

This is the output from `dcv get-config`:

```
[connectivity]
web-use-https = false
web-port = 8080
web-extra-http-headers = [('test-header', 'test-value')]

[security]
authentication = 'none'
```

This is the output from `systemctl status dcvserver`:

```
● dcvserver.service - NICE DCV server daemon
     Loaded: loaded (/lib/systemd/system/dcvserver.service; enabled; vendor preset: enable>
     Active: active (running) since Fri 2022-09-23 12:58:40 UTC; 18min ago
   Main PID: 715 (dcvserver)
      Tasks: 6 (limit: 76196)
     Memory: 39.9M
     CGroup: /system.slice/dcvserver.service
             ├─715 /bin/bash /usr/bin/dcvserver -d --service
             └─724 /usr/lib/x86_64-linux-gnu/dcv/dcvserver --service

Sep 23 12:58:40 ip-10-0-0-115 systemd[1]: Starting NICE DCV server daemon...
Sep 23 12:58:40 ip-10-0-0-115 systemd[1]: Started NICE DCV server daemon.
```

I'm trying to access the page at http://<public ip>:8080. I've also tried including the #session_id part in the URL and using the Windows client, with no luck. My operating system is Ubuntu 20.04 with a custom AMI, running on a g4dn.4xlarge instance.
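A minimal local check, assuming the config shown above is what the running server actually loaded (HTTP on port 8080) and that you have shell access on the instance; nothing below is specific to this setup beyond those two values. Requesting the page from the instance itself separates a server-side 404 from a security-group or routing problem, and DCV only picks up dcv.conf changes after a restart.

```
# Confirm the DCV web listener is bound to the port reported by dcv get-config (8080 here)
sudo ss -tlnp | grep ':8080'

# Request the web client locally first, to rule out network/security-group issues
curl -v http://localhost:8080/

# Changes to /etc/dcv/dcv.conf only take effect after the server is restarted
sudo systemctl restart dcvserver
```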
0 answers · 0 votes · 18 views · asked 2 days ago

Network Firewall shows "aws:alert_strict action" when it is set with the Strict Order stateful engine option

Hello, I'm using AWS Network Firewall. First, I tried AWS Managed Rules together with an Allow Domain List custom rule using the default action order. From my understanding, the default action order is Pass -> Drop -> Alert. When I then tested downloading files from a domain on the allow list, the traffic always passed because the domain is allowed, so the **ThreatSignaturesMalwareCoinmining** rule group will not perform any actions. Am I correct?

So I tried changing from the default action order to strict order. The default actions are drop:all and alert:all. I expected that the network firewall would process my rule groups by priority and the rules in each rule group in order. I copied the Suricata content from the AWS Managed Rule and created a new rule group as shown in the pictures.

![Enter image description here](/media/postImages/original/IMT6cNSaDhTbGF4Ym0R7I1sQ)
![Enter image description here](/media/postImages/original/IMQKpehfhvQdCQLbXZVvTS4g)

My example allowed domains are AWS domains:

```
pass http $HOME_NET any -> $EXTERNAL_NET 80 (http.host; dotprefix; content:".amazonaws.com"; endswith; msg:"Allow HTTP traffic to .amazonaws.com"; flow:to_server, established; sid:1000101; rev:1;)
pass tls $HOME_NET any -> $EXTERNAL_NET 443 (tls.sni; dotprefix; content:".amazonaws.com"; endswith; msg:"Allow TLS traffic to .amazonaws.com"; flow:to_server, established; sid:1000102; rev:1;)
```

Then I added these rules to my firewall policy, but I found that it still blocks the traffic to .amazonaws.com:

```
{
  "firewall_name": "inspector",
  "availability_zone": "ap-southeast-1a",
  "event_timestamp": "1663828976",
  "event": {
    "timestamp": "2022-09-22T06:42:56.727635+0000",
    "flow_id": 1066945104298575,
    "event_type": "alert",
    "src_ip": "10.x.x.x",
    "src_port": 23602,
    "dest_ip": "3.0.186.102",
    "dest_port": 443,
    "proto": "TCP",
    "alert": {
      "action": "blocked",
      "signature_id": 2,
      "rev": 0,
      "signature": "aws:alert_strict action",
      "category": "",
      "severity": 3
    }
  }
}
```

I checked that 3.0.186.102 is owned by AWS (ec2-xxx.amazonaws.com). Why does the network firewall always block the requests to the AWS domain?
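One thing this pattern can run into, sketched here only as a possibility: with strict ordering and a drop:all default action, a `pass tls` rule matching on tls.sni can only fire once the SNI is visible, so the packets of the initial TCP handshake may already be dropped by the default action before the rule ever gets a chance to match (which would surface as the generic `aws:alert_strict action` entry above). Common mitigations are to use the "drop established" default action instead of drop:all, or to add an explicit pass rule for not-yet-established traffic; the rule below is purely illustrative, with a placeholder sid and msg.

```
# Illustrative only: let the TCP handshake through so the tls.sni pass rule can evaluate
pass tcp $HOME_NET any <> $EXTERNAL_NET 443 (flow:not_established; msg:"Pass TCP handshake for TLS SNI evaluation"; sid:1000199; rev:1;)
```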
4 answers · 0 votes · 43 views · asked 3 days ago

How to create parent policy that limits permissions of child policies it creates

- The context: I am Account A. In the master/parent policy that I am given, I will be able to create, update, and delete policies/roles AND other infrastructure resources in Account B.
- The goal: I want to craft this master policy so that it can manage ONLY the resources I have created.
- NOT the problem: trust relationships, external IDs, confused deputy, getting access to Account B, etc.
- IS the problem: I don't know of a way to enforce that all child policies I create must also carry all of the conditions that the parent policy has. Therefore, a child policy could be created with much greater permissions than the parent policy, defeating the purpose of limiting access to only the resources I have created.
- CLARIFYING SCENARIO: In the master/parent policy giving access to Account A, I could add the condition that all resources, child policies, child roles, etc. in Account B MUST be created with tags and MUST have those tags in order to be updated or deleted. HOWEVER, while I can create policies that, say, have that tag, I do not know of any way to enforce that THOSE child policies must ALSO include the EXACT SAME condition, so that they too can ONLY create/update/delete tagged resources.

How might parent policy conditions be enforced in all child policies, such that nothing created could have greater permissions than its creator? If this doesn't exist, it seems like a massive oversight in permissions management in AWS.
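For context, the mechanism usually pointed at for this kind of constraint is an IAM permissions boundary: the parent policy allows role creation only when the new role carries a designated boundary policy, and denies removing that boundary afterwards. A minimal sketch of such statements, using a hypothetical account ID, role path, and boundary policy name (none of which come from the question):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateRolesOnlyWithBoundary",
      "Effect": "Allow",
      "Action": ["iam:CreateRole", "iam:PutRolePermissionsBoundary"],
      "Resource": "arn:aws:iam::111122223333:role/app-*",
      "Condition": {
        "StringEquals": {
          "iam:PermissionsBoundary": "arn:aws:iam::111122223333:policy/ChildBoundary"
        }
      }
    },
    {
      "Sid": "BlockBoundaryRemoval",
      "Effect": "Deny",
      "Action": "iam:DeleteRolePermissionsBoundary",
      "Resource": "arn:aws:iam::111122223333:role/app-*"
    }
  ]
}
```

The boundary policy itself (ChildBoundary here) would then hold the tag-based conditions, so a role created under it cannot exceed those conditions regardless of what its attached child policies say.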
2 answers · 0 votes · 36 views · asked 4 days ago

