Questions tagged with Networking & Content Delivery
Limit access to MWAA Public Environment UI
I set up a public MWAA environment, but I want to limit UI access to a specific IP range. I tried removing everything from the inbound security group that the MWAA public environment uses, but the UI is still accessible from the public internet. Removing the rules also caused the scheduler to crash; adding back an inbound rule for port 5432 fixed that, and it is now the only inbound rule the environment has. I am probably missing something, but I'm not sure what. Is it possible to limit access to the UI? Thanks
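For context: with public webserver access, the Airflow UI is served through an AWS-managed endpoint, which is why changes to the environment's security group don't appear to gate it. If IP-restricting the public endpoint turns out not to be possible, one alternative is switching the webserver to private access. A minimal boto3 sketch, assuming a hypothetical environment name `my-mwaa-env` and region `us-east-1`:

```python
# Hypothetical sketch: switch an MWAA environment's webserver to private
# access, after which the UI is reachable only from inside the VPC
# (e.g., over a VPN or through a load balancer you control).
import boto3

mwaa = boto3.client("mwaa", region_name="us-east-1")  # assumed region

response = mwaa.update_environment(
    Name="my-mwaa-env",                 # assumed environment name
    WebserverAccessMode="PRIVATE_ONLY", # was PUBLIC_ONLY
)
print(response["Arn"])  # the environment update takes a while to apply
```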
Private MWAA - Snowflake Connection Issue - Amazon Managed Workflows for Apache Airflow
I set up a private Airflow environment in AWS (v2.2.2). The environment and plugins are up and running. I want to connect to Snowflake, but I get the error below (the .whl files are in plugins.zip, installed via requirements.txt).

```
snowflake.connector.vendored.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='......snowflakecomputing.com', port=443): Max retries exceeded with url: /session/v1/login-request?request_id=....... (Caused by ConnectTimeoutError(<snowflake.connector.vendored.urllib3.connection.HTTPSConnection object at >, 'Connection to ........snowflakecomputing.com timed out. (connect timeout=60)'))
```

The same connection works in a public MWAA environment. I am adding the connection information in the Admin > Connections tab of the UI. I know the private environment has no connection to the internet; if I try to call any external API I also get a timeout, since the subnets are not connected to the internet. The private MWAA environment runs in an existing VPC that has an IGW attached, but the subnets MWAA uses have no IGW or NAT attachment (as the documentation suggests). I have checked all the documentation, but there is no information about outbound connectivity from a private environment. How can I solve this?
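The usual pattern for outbound traffic from private subnets is a NAT gateway in a public subnet plus a default route in the private subnets' route table. A minimal boto3 sketch under that assumption; the subnet, Elastic IP allocation, and route table IDs are all hypothetical placeholders:

```python
# Hypothetical sketch: give private MWAA subnets outbound internet access
# through a NAT gateway, so HTTPS calls to Snowflake can leave the VPC.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# The NAT gateway must live in a *public* subnet (one with an IGW route).
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0pub1234567890abc",        # assumed public subnet
    AllocationId="eipalloc-0123456789abcdef0",  # assumed Elastic IP allocation
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Default route for the route table associated with the private MWAA subnets.
ec2.create_route(
    RouteTableId="rtb-0priv123456789abc",  # assumed private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```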
NICE DCV connection over OpenVPN and lack of GPU
Hi, I installed the NICE DCV server on Ubuntu 22.04 on my EC2 instance. The instance is an m6i, and the NICE DCV installation guide mentions that I can do without a GPU. My first question is: is that true? Can I run a remote desktop client (the NICE DCV viewer in this case) when my instance does not have a GPU?

Secondly, my company uses openvpn3 for us to connect to EC2 instances, i.e. no public IP, but from my computer I can SSH to the instances using only the private IPs. Now, when I try to connect to the instance with the NICE DCV viewer from my computer (with OpenVPN running), I still get the message that the connection was refused. What could cause this? I have set up the security group for TCP/UDP on port 8443 and an IAM role for the NICE DCV license. -Davood
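One detail worth isolating: "connection refused" usually means the port is reachable but nothing is listening on it (the dcvserver service is down or bound elsewhere), whereas a security group or routing block typically shows up as a timeout. A small stdlib sketch to tell the two apart from the client side, assuming a hypothetical private IP of 10.0.1.23:

```python
# Hypothetical sketch: distinguish "refused" (nothing listening) from
# "timed out" (traffic blocked/dropped) when reaching NICE DCV on 8443.
import socket

HOST = "10.0.1.23"  # assumed private IP of the EC2 instance
PORT = 8443

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
try:
    sock.connect((HOST, PORT))
    print("TCP connect succeeded: something is listening on 8443")
except ConnectionRefusedError:
    print("Refused: host reachable but no service listening (check dcvserver)")
except socket.timeout:
    print("Timed out: traffic likely blocked (security group, NACL, VPN route)")
finally:
    sock.close()
```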
Is It Possible to Make an EC2 Instance Part of a VPN Protected by GlobalProtect?
What am I running?

* An EC2 instance running Ubuntu 22.04 with a static Elastic IP address.
* The instance has only one network interface, whose details say it is an elastic network interface. (I believed every instance has a primary network interface, but I do not see any PNI.)

What do I want to do?

My company has an on-prem virtual machine running MSSQL Server at 192.168.181.75:1433, but that is behind the GlobalProtect VPN from Palo Alto Networks. Every time I make a call to that database, I have to connect to GlobalProtect manually from my laptop. So my question is: is there any special step I need to take to make the EC2 instance part of the GlobalProtect network?

I talked to my company's network administrator, who wants the public IP address of the EC2 instance (which I use for SSH) and the MAC address. I got the MAC address by entering

```
$ ip addr
```

in the terminal, under the *ens3* interface. But can I assume these two will remain fixed across stopping and restarting the instance? Also, do the inbound/outbound rules have to be altered? Some reading led me to believe I have to create an ENI, as primary network interfaces do not support it. But when I checked the instance details, it seems the only interface present is an ENI.
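On the "will these stay fixed" question: an Elastic IP and the ENI's MAC address both survive stop/start (an auto-assigned public IP would not). A stdlib sketch that reads both from the instance metadata service (IMDSv2) on the instance itself, using the documented metadata paths:

```python
# Hypothetical sketch: read the public IP and MAC address from the EC2
# instance metadata service (IMDSv2), run on the instance itself.
import urllib.request

BASE = "http://169.254.169.254/latest"

# IMDSv2 requires fetching a session token first.
token_req = urllib.request.Request(
    f"{BASE}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

def meta(path: str) -> str:
    req = urllib.request.Request(
        f"{BASE}/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req).read().decode()

print("MAC address:", meta("mac"))
print("Public IPv4:", meta("public-ipv4"))
```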
How to pass connection information between two dependent Windows instances during CloudFormation?
I am doing a lift and shift of software from an on-premises architecture. There are two servers (main and auxiliary) that have to talk to one another over the network. I have tested and confirmed that I can manually add their hostnames and private IP addresses to the hosts file (`"C:\Windows\System32\drivers\etc\hosts"`) and the software works fine. For those who don't know, this file is used by Windows to map a network hostname like `EC2AM-1A2B3C` to an IP address. So if I add the hostname and IP address of the main server to the hosts file of the auxiliary server, the auxiliary server can route to the main server (i.e. `PS> ping EC2AM-1A2B3C` then works). How could I pass the required information to both servers? Each has to know the other server's private IP address and hostname. If this is not possible at server spin-up time, how might the servers connect and pass this information? I would really like to automate this if possible.
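One hedged way to automate this: have each instance look up its peer at boot (via a tag-based EC2 query) and append a hosts entry. A sketch, assuming a hypothetical tagging scheme (`Role` = `main`/`auxiliary`, plus a `Hostname` tag) and an instance profile that allows `ec2:DescribeInstances`:

```python
# Hypothetical sketch: at boot, find the peer instance by tag and append
# its private IP + hostname to the Windows hosts file.
import boto3

PEER_ROLE = "main"  # on the main server this would be "auxiliary"
HOSTS = r"C:\Windows\System32\drivers\etc\hosts"

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Role", "Values": [PEER_ROLE]},  # assumed tag scheme
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

for res in reservations:
    for inst in res["Instances"]:
        ip = inst["PrivateIpAddress"]
        tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
        hostname = tags.get("Hostname")  # assumed Hostname tag
        if hostname:
            with open(HOSTS, "a") as f:
                f.write(f"\n{ip} {hostname}")
```

Run from each instance's user data (or a cfn-init step), this gives both servers the other's entry without hardcoding IPs in the template.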
AWS Lightsail Firewall
Hello, I am using AWS Lightsail to host my website, with Cloudflare DNS + WAF for protection. I am trying to whitelist the Cloudflare IPs on the AWS infra, but after defining the ACL the site becomes unreachable. When I remove the ACL, the site is back online. I am making firewall rules for HTTP and HTTPS. Am I missing anything? The ranges are from https://www.cloudflare.com/en-gb/ips/:

173.245.48.0/20
103.21.244.0/22
103.22.200.0/22
103.31.4.0/22
141.101.64.0/18
108.162.192.0/18
190.93.240.0/20
188.114.96.0/20
197.234.240.0/22
198.41.128.0/17
162.158.0.0/15
104.16.0.0/13
104.24.0.0/14
172.64.0.0/13
131.0.72.0/22
2400:cb00::/32
2606:4700::/32
2803:f800::/32
2405:b500::/32
2405:8100::/32
2a06:98c0::/29
2c0f:f248::/32
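One likely culprit: Lightsail's `put_instance_public_ports` call replaces the instance's entire rule set, so a partial update can knock out access (including SSH). Applying everything in a single call avoids that. A boto3 sketch, assuming a hypothetical instance named `my-site` and the ranges above; IPv4 goes in `cidrs` and IPv6 in `ipv6Cidrs`:

```python
# Hypothetical sketch: allow HTTP/HTTPS only from Cloudflare's ranges.
# NOTE: put_instance_public_ports REPLACES all existing rules, so include
# an SSH rule here too or you will lose port 22 access.
import boto3

CF_V4 = ["173.245.48.0/20", "103.21.244.0/22", "103.22.200.0/22",
         "103.31.4.0/22", "141.101.64.0/18", "108.162.192.0/18",
         "190.93.240.0/20", "188.114.96.0/20", "197.234.240.0/22",
         "198.41.128.0/17", "162.158.0.0/15", "104.16.0.0/13",
         "104.24.0.0/14", "172.64.0.0/13", "131.0.72.0/22"]
CF_V6 = ["2400:cb00::/32", "2606:4700::/32", "2803:f800::/32",
         "2405:b500::/32", "2405:8100::/32", "2a06:98c0::/29",
         "2c0f:f248::/32"]

lightsail = boto3.client("lightsail", region_name="us-east-1")  # assumed region
lightsail.put_instance_public_ports(
    instanceName="my-site",  # assumed instance name
    portInfos=[
        {"fromPort": p, "toPort": p, "protocol": "tcp",
         "cidrs": CF_V4, "ipv6Cidrs": CF_V6}
        for p in (80, 443)
    ] + [
        {"fromPort": 22, "toPort": 22, "protocol": "tcp",
         "cidrs": ["203.0.113.10/32"]}  # assumed admin IP for SSH
    ],
)
```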
How does EC2 hop to a publicly accessible RDS endpoint?
Hey team, say I have an RDS endpoint that's publicly accessible. I then access this endpoint from an EC2 instance. What happens at the network layer? Does the request go over the public internet? Ideally, the system would know that we're inside the same VPC and hop right over. How could I confirm this?
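One way to confirm it: resolve the RDS endpoint from inside the VPC and see whether the answer is a private address (it generally is when the client and the database share the VPC, which means the traffic never leaves it). A stdlib sketch with a hypothetical endpoint name:

```python
# Hypothetical sketch: resolve an RDS endpoint from an EC2 instance and
# check whether the answer is a private (RFC 1918) address.
import ipaddress
import socket

ENDPOINT = "mydb.abc123xyz.us-east-1.rds.amazonaws.com"  # assumed endpoint

ip = socket.gethostbyname(ENDPOINT)
addr = ipaddress.ip_address(ip)
print(ENDPOINT, "->", ip, "(private)" if addr.is_private else "(public)")
```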
How can I use Cloudfront with a root domain name?
I set up a CloudFront distribution. I use a non-AWS domain registrar and DNS. I want my distribution to respond to "https://mydomain.com", but there is a problem. CloudFront provides a domain name and asks you to create a CNAME record in DNS, but you can't create a CNAME record that points to the root domain or "@", like you can with a regular A record.

To get around the problem, I set up "www.mydomain.com" as the CNAME record. If I type "https://www.mydomain.com" into my browser it works, but of course "mydomain.com" without "www" does not. The next thing I did was create a permanent redirect in DNS that should redirect mydomain.com to www.mydomain.com. Now I can type "http://mydomain.com" and it redirects to www.mydomain.com and works. But if I type "https://mydomain.com" (with HTTPS instead of HTTP) it does not work. I presume this is because whatever server implements the redirect (I use GoDaddy) doesn't have my SSL certificate, so the connection can't be made.

I'm not sure how to resolve this. What I need, I think, is some web server on a fixed IP address that also has my SSL certificate and can simply respond to all requests with a permanent redirect. The only way I can think of to do this in AWS would be to set up an entire EC2 instance with my own web server, which is a lot of work and cost. Is there a better solution? My company doesn't want to move our DNS or domain registration to AWS, so using something like Route 53 is probably not an option.

Thanks, Frank
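To make that last idea concrete: the redirecting server itself is tiny; the hard part is exactly the TLS termination described above. A minimal stdlib sketch of a server that 301-redirects everything to the www host (plain HTTP shown; the HTTPS case would additionally need to terminate TLS with a certificate for mydomain.com):

```python
# Hypothetical sketch: the "respond to all requests with a permanent
# redirect" server described above. Every request gets a 301 to www.
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET = "https://www.mydomain.com"  # assumed destination host

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)
        self.send_header("Location", TARGET + self.path)
        self.end_headers()

HTTPServer(("", 80), RedirectHandler).serve_forever()
```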
Route53 redirecting ServiceNow Instance directly to ServiceNow.com
I have a hosted zone set up in AWS Route 53 (labs.mycompany.com) and a simple CNAME record inside that hosted zone (servicenow.labs.mycompany.com), configured to point to our ServiceNow instance (dev12345.service-now.com). But rather than going to the instance, it redirects to servicenow.com directly. I did a dig and the DNS record appears to be accurate and fine. I tried a curl, and I'm guessing the redirect is from HTTP to HTTPS, which I believe is standard; I get SSL errors when trying HTTP, which is to be expected. I can only assume at this point that ServiceNow must be doing some kind of domain filtering at the load balancer, and the website I'm getting redirected to is just the default target when no patterns match. How do I work around this so I can point our URL directly to our ServiceNow instance? Thanks
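That hypothesis is testable: a CNAME only changes DNS resolution, so the browser still presents `servicenow.labs.mycompany.com` in SNI and the Host header, and a host-based router on the other end that doesn't recognize the name falls through to a default. A sketch comparing what each hostname returns (uses the `requests` library; hostnames are the ones from the question):

```python
# Hypothetical sketch: if the CNAME'd host gets a redirect to servicenow.com
# while the instance hostname does not, the target is routing on the
# Host/SNI name, not on DNS.
import requests

for host in ("dev12345.service-now.com", "servicenow.labs.mycompany.com"):
    # verify=False because the ServiceNow cert won't match the CNAME'd name
    r = requests.get(f"https://{host}/", allow_redirects=False, verify=False)
    print(host, "->", r.status_code, r.headers.get("Location"))
```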
Hi guys, DataSync was working perfectly fine and now it doesn't work; the network connectivity test fails with "SSL Test failed: no certificate issuer found"
Hi guys, we were transferring TBs with DataSync and everything was working just fine, but one day I wanted to start a new task and the agent was offline. This is the first time the agent has been in that state. We logged into the agent's console and ran the network connectivity test, and everything went wrong: all the tests failed with the same message, "SSL Test failed". I am including a picture of that. I would be grateful if you could help me. ![SSL test failures in the DataSync agent console](/media/postImages/original/IM-uMrXHvuRNWCQCrvkXgB6w)
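"No certificate issuer found" often points at something between the agent and the AWS endpoints rewriting TLS (a proxy or firewall presenting its own certificate). A sketch that fetches the certificate an endpoint actually presents and prints its issuer, so an Amazon CA can be distinguished from a corporate middlebox; the hostname is an example regional DataSync endpoint, and the issuer decoding assumes the `cryptography` package is installed:

```python
# Hypothetical sketch: print the issuer of the certificate actually served
# on port 443. An unexpected corporate CA here suggests TLS interception.
import socket
import ssl

from cryptography import x509

HOST = "datasync.us-east-1.amazonaws.com"  # example regional endpoint

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # we want to *see* the cert, not validate it
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
print("Issuer:", cert.issuer.rfc4514_string())
```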