Questions tagged with Amazon EC2


EKS Node Not Ready

I have an EKS cluster with four t3.large nodes running approximately 50 small pods. Often, when I update the application version from x to y, a few nodes go into a NotReady state. I then have to clean up a few resources and reboot the worker node before the situation returns to normal. Any suggestions? Logs from kube-proxy:

```
I0927 16:12:05.785853 1 proxier.go:790] "SyncProxyRules complete" elapsed="104.231873ms"
I0927 16:18:27.078985 1 trace.go:205] Trace[1094698301]: "iptables ChainExists" (27-Sep-2022 16:16:36.489) (total time: 66869ms):
Trace[1094698301]: [1m6.869976178s] [1m6.869976178s] END
I0927 16:18:27.087821 1 trace.go:205] Trace[1957650533]: "iptables ChainExists" (27-Sep-2022 16:16:36.466) (total time: 67555ms):
Trace[1957650533]: [1m7.555663612s] [1m7.555663612s] END
I0927 16:18:27.124923 1 trace.go:205] Trace[460012371]: "DeltaFIFO Pop Process" ID:monitoring/prometheus-prometheus-node-exporter-gslfb,Depth:36,Reason:slow event handlers blocking the queue (27-Sep-2022 16:18:26.836) (total time: 186ms):
Trace[460012371]: [186.190275ms] [186.190275ms] END
W0927 16:18:27.248231 1 reflector.go:442] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0927 16:18:27.272469 1 reflector.go:442] k8s.io/client-go/informers/factory.go:134: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0927 16:18:31.339045 1 trace.go:205] Trace[1140734081]: "DeltaFIFO Pop Process" ID:cuberun/cuberun-cuberun,Depth:42,Reason:slow event handlers blocking the queue (27-Sep-2022 16:18:30.696) (total time: 116ms):
Trace[1140734081]: [116.029921ms] [116.029921ms] END
I0927 16:18:32.403993 1 trace.go:205] Trace[903972463]: "DeltaFIFO Pop Process" ID:cuberundemo/cuberun-cuberundemo,Depth:41,Reason:slow event handlers blocking the queue (27-Sep-2022 16:18:31.657) (total time: 196ms):
Trace[903972463]: [196.24798ms] [196.24798ms] END
I0927 16:18:33.233172 1 trace.go:205] Trace[1265312678]: "DeltaFIFO Pop Process" ID:argocd/argocd-metrics,Depth:40,Reason:slow event handlers blocking the queue (27-Sep-2022 16:18:32.738) (total time: 359ms):
Trace[1265312678]: [359.090093ms] [359.090093ms] END
I0927 16:18:33.261077 1 proxier.go:823] "Syncing iptables rules"
I0927 16:18:35.474678 1 proxier.go:790] "SyncProxyRules complete" elapsed="2.867637015s"
I0927 16:18:35.587939 1 proxier.go:823] "Syncing iptables rules"
I0927 16:18:37.014157 1 proxier.go:790] "SyncProxyRules complete" elapsed="1.45321438s"
I0927 16:19:08.904513 1 trace.go:205] Trace[1753182031]: "iptables ChainExists" (27-Sep-2022 16:19:06.254) (total time: 2266ms):
Trace[1753182031]: [2.266311394s] [2.266311394s] END
I0927 16:19:08.904456 1 trace.go:205] Trace[228375231]: "iptables ChainExists" (27-Sep-2022 16:19:06.299) (total time: 2255ms):
Trace[228375231]: [2.255433291s] [2.255433291s] END
I0927 16:19:40.540864 1 trace.go:205] Trace[2069259157]: "iptables ChainExists" (27-Sep-2022 16:19:36.494) (total time: 3430ms):
Trace[2069259157]: [3.430008597s] [3.430008597s] END
I0927 16:19:40.540873 1 trace.go:205] Trace[757252858]: "iptables ChainExists" (27-Sep-2022 16:19:36.304) (total time: 3619ms):
Trace[757252858]: [3.61980147s] [3.61980147s] END
I0927 16:20:09.976580 1 trace.go:205] Trace[2070318544]: "iptables ChainExists" (27-Sep-2022 16:20:06.285) (total time: 3182ms):
Trace[2070318544]: [3.182449365s] [3.182449365s] END
I0927 16:20:09.976592 1 trace.go:205] Trace[852062251]: "iptables ChainExists" (27-Sep-2022 16:20:06.313) (total time: 3154ms):
Trace[852062251]: [3.154369999s] [3.154369999s] END
```
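For anyone triaging something similar: multi-second `iptables ChainExists` traces and lost `http2` watch connections in a kube-proxy log like this can point to CPU or memory pressure on the worker node rather than a kube-proxy bug. A minimal first-pass check before rebooting (a sketch; `<node-name>` is a placeholder) might look like:

```
# Identify which nodes are NotReady and why (see the Conditions and Events sections):
kubectl get nodes -o wide
kubectl describe node <node-name>

# List everything scheduled on the affected node:
kubectl get pods -A --field-selector spec.nodeName=<node-name>

# On the worker itself, check kubelet health and resource pressure:
journalctl -u kubelet --since "1 hour ago" | tail -n 50
free -m && df -h
```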
0 answers, 0 votes, 16 views, asked 10 days ago

Stuck in stopping state

Hello there, I have launched an EC2 instance with hibernation enabled from a custom Ubuntu 20.04 based AMI. When I choose the hibernate option for this instance, it takes more than 20 minutes to go from the stopping to the stopped state, and I don't know why. I have tried multiple EC2 instances launched from this custom AMI, and all of them take more than 20 minutes to reach the stopped state. I also increased my root volume size, so the root volume now has more than 15 GB of free space; however, it still takes this long to stop when I choose the hibernate option from the console. I can see the hibernation-related logs in /var/log/syslog. Can anyone please help me get past this issue?

```
Sep 27 12:17:41 SparxEA systemd[1]: Starting EC2 instance hibernation setup agent...
Sep 27 12:17:41 SparxEA /hibinit-agent: Effective config: {'log_to_syslog': True, 'log_to_stderr': True, 'mkswap': 'mkswap {swapfile}', 'swapon': 'swapon {swapfile}', 'swapoff': 'swapoff {swapfile}', 'touch_swap': False, 'grub_update': True, 'swap_percentage': 100, 'swap_mb': 4000}
Sep 27 12:17:41 SparxEA /hibinit-agent: Will check if swap is at least: 4000 megabytes
Sep 27 12:17:41 SparxEA /hibinit-agent: Create swap and initialize it
Sep 27 12:17:41 SparxEA hibinit-agent[1101]: Effective config: {'log_to_syslog': True, 'log_to_stderr': True, 'mkswap': 'mkswap {swapfile}', 'swapon': 'swapon {swapfile}', 'swapoff': 'swapoff {swapfile}', 'touch_swap': False, 'grub_update': True, 'swap_percentage': 100, 'swap_mb': 4000}
Sep 27 12:17:41 SparxEA hibinit-agent[1101]: Will check if swap is at least: 4000 megabytes
Sep 27 12:17:41 SparxEA hibinit-agent[1101]: Create swap and initialize it
Sep 27 12:17:41 SparxEA /hibinit-agent: kicking child process to initiate the setup
Sep 27 12:17:41 SparxEA /hibinit-agent: Allocating 4194304000 bytes in /swap-hibinit
Sep 27 12:17:41 SparxEA /hibinit-agent: Swap pre-heating is skipped, the swap blocks won't be touched during to ensure they are ready
Sep 27 12:17:41 SparxEA /hibinit-agent: Running: mkswap /swap-hibinit
Sep 27 12:17:41 SparxEA systemd[1]: Started EC2 instance hibernation setup agent.
Sep 27 12:17:41 SparxEA hibinit-agent[1105]: Setting up swapspace version 1, size = 3.9 GiB (4194299904 bytes)
Sep 27 12:17:41 SparxEA hibinit-agent[1105]: no label, UUID=16350ccb-d242-40f1-93f5-9fbe280d33ce
Sep 27 12:17:41 SparxEA /hibinit-agent: Running: swapon /swap-hibinit
Sep 27 12:17:41 SparxEA kernel: [ 25.160330] Adding 4095996k swap on /swap-hibinit. Priority:-2 extents:16 across:11141120k SSFS
Sep 27 12:17:41 SparxEA /hibinit-agent: Updating the kernel offset for the swapfile: /swap-hibinit
Sep 27 12:17:41 SparxEA /hibinit-agent: Updating GRUB to use the device PARTUUID=4986e35b-1bd5-45d3-b528-fa2edb861a38 with offset 4161536 for resume
Sep 27 12:17:42 SparxEA hibinit-agent[1112]: Sourcing file `/etc/default/grub'
Sep 27 12:17:42 SparxEA hibinit-agent[1112]: Sourcing file `/etc/default/grub.d/40-force-partuuid.cfg'
Sep 27 12:17:42 SparxEA hibinit-agent[1112]: Sourcing file `/etc/default/grub.d/50-cloudimg-settings.cfg'
Sep 27 12:17:42 SparxEA hibinit-agent[1112]: Sourcing file `/etc/default/grub.d/99-set-swap.cfg'
Sep 27 12:17:42 SparxEA hibinit-agent[1112]: Sourcing file `/etc/default/grub.d/init-select.cfg'
Sep 27 12:17:42 SparxEA hibinit-agent[1187]: Generating grub configuration file ...
Sep 27 12:17:42 SparxEA hibinit-agent[1245]: GRUB_FORCE_PARTUUID is set, will attempt initrdless boot
Sep 27 12:17:42 SparxEA hibinit-agent[1245]: Found linux image: /boot/vmlinuz-5.15.0-1019-aws
Sep 27 12:17:42 SparxEA hibinit-agent[1245]: Found initrd image: /boot/microcode.cpio /boot/initrd.img-5.15.0-1019-aws
Sep 27 12:17:43 SparxEA hibinit-agent[1245]: Found linux image: /boot/vmlinuz-5.13.0-1029-aws
Sep 27 12:17:43 SparxEA hibinit-agent[1245]: Found initrd image: /boot/microcode.cpio /boot/initrd.img-5.13.0-1029-aws
Sep 27 12:17:43 SparxEA hibinit-agent[1740]: Found memtest86+ image: /boot/memtest86+.elf
Sep 27 12:17:43 SparxEA hibinit-agent[1740]: Found memtest86+ image: /boot/memtest86+.bin
Sep 27 12:17:45 SparxEA hibinit-agent[1823]: Found Ubuntu 20.04.5 LTS (20.04) on /dev/nvme0n1p1
Sep 27 12:17:46 SparxEA hibinit-agent[3078]: done
Sep 27 12:17:46 SparxEA /hibinit-agent: GRUB configuration is updated
Sep 27 12:17:46 SparxEA /hibinit-agent: Setting swap device to 66305 with offset 4161536
Sep 27 12:17:46 SparxEA /hibinit-agent: Done updating the swap offset. Turning swapoff
Sep 27 12:17:46 SparxEA /hibinit-agent: Running: swapoff /swap-hibinit
Sep 27 12:17:46 SparxEA systemd[1]: swap\x2dhibinit.swap: Succeeded.
Sep 27 12:17:46 SparxEA systemd[877]: swap\x2dhibinit.swap: Succeeded.
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Effective config: {'log_to_syslog': True, 'log_to_stderr': True, 'mkswap': 'mkswap {swapfile}', 'swapon': 'swapon {swapfile}', 'swapoff': 'swapoff {swapfile}', 'touch_swap': False, 'grub_update': True, 'swap_percentage': 100, 'swap_mb': 4000}
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Will check if swap is at least: 4000 megabytes
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Create swap and initialize it
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: kicking child process to initiate the setup
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Allocating 4194304000 bytes in /swap-hibinit
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Swap pre-heating is skipped, the swap blocks won't be touched during to ensure they are ready
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Running: mkswap /swap-hibinit
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Running: swapon /swap-hibinit
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Updating the kernel offset for the swapfile: /swap-hibinit
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Updating GRUB to use the device PARTUUID=4986e35b-1bd5-45d3-b528-fa2edb861a38 with offset 4161536 for resume
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: GRUB configuration is updated
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Setting swap device to 66305 with offset 4161536
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Done updating the swap offset. Turning swapoff
Sep 27 12:17:46 SparxEA hibinit-agent[1103]: Running: swapoff /swap-hibinit
Sep 27 12:17:46 SparxEA systemd[1]: hibinit-agent.service: Succeeded.
```
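For context on why the stopping phase can take this long: when an instance hibernates, the entire RAM contents are written to the swap file on the root EBS volume, so the stop time is bounded by RAM size and the volume's throughput, not just free disk space. The log above shows hibinit-agent provisioning a roughly 4 GB swap file on the root volume. A quick sanity check before hibernating (a sketch using standard Linux tools) could be:

```
# RAM that must be written out to the swap file during hibernation:
free -h

# The swap file created by hibinit-agent (per the log above):
ls -lh /swap-hibinit

# Free space and device backing the root volume:
df -h /
```

If the RAM footprint is several GiB and the root volume is a small gp2 volume with a depleted burst balance, the RAM dump itself could plausibly dominate the 20-minute stopping window; checking the volume's type and throughput in the EC2 console would confirm or rule that out.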
1 answer, 0 votes, 30 views, asked 10 days ago

Django Daphne Websocket Access Denied

We need to establish a WebSocket connection to our AWS servers using Django, Django Channels, Redis, and Daphne behind Nginx. The local and on-premises configuration works correctly; we need help getting the same communication working on the staging server. We tried adding the configuration below to our servers, but the server rejects WebSocket requests with access denied (HTTP 403). Below is the **Nginx config** for staging:

```
server {
    listen 80;
    server_name domain_name.com domain_name_2.com;
    root /var/www/services/project_name_frontend/;
    index index.html;

    location ~ ^/api/ {
        rewrite ^/api/(.*) /$1 break;
        proxy_pass http://unix:/var/www/services/enerlly_backend/backend/backend.sock;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_read_timeout 30;
        proxy_connect_timeout 30;
        proxy_send_timeout 30;
        send_timeout 30;
        proxy_redirect ~^/(.*) $scheme://$host/api/$1;
    }

    location /ws {
        try_files $uri @proxy_to_ws;
    }

    location @proxy_to_ws {
        proxy_pass http://127.0.0.1:8001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
    }

    location ~ ^/admin/ {
        proxy_pass http://unix:/var/www/services/project_name_backend/backend/backend.sock;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_read_timeout 30;
        proxy_connect_timeout 30;
        proxy_send_timeout 30;
        send_timeout 30;
        proxy_redirect off;
    }

    location /staticfiles/ {
        alias /var/www/services/project_name_backend/backend/staticfiles/;
    }

    location /mediafiles/ {
        alias /var/www/services/project_name_backend/backend/mediafiles/;
    }

    location / {
        try_files $uri /index.html;
    }
}
```

and the **systemd service** that runs Daphne:

```
[Unit]
Description=Backend Project Django WebSocket daemon
After=network.target

[Service]
User=root
Group=www-data
WorkingDirectory=/var/www/services/project_name_backend
ExecStart=/home/ubuntu/project_python_venv/bin/python /home/ubuntu/project_python_venv/bin/daphne -b 0.0.0.0 -p 8001 project_name_backend.prod_asgi:application

[Install]
WantedBy=multi-user.target
```

**Below are the Load Balancer security group inbound rules:**

![Load balancer security group inbound rules](/media/postImages/original/IMN2LT2BlTSmK0PHEAu5dwHQ)

**Listener config for the Load Balancer:**

![Load balancer listener configuration](/media/postImages/original/IMxBGKpaJOSrSsOQyn5FEt-Q)

![Load balancer listener rules](/media/postImages/original/IMktSIYK0ZSOy8GzYyR-DI_w)
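One thing worth checking with a setup like this: a 403 on the WebSocket handshake is commonly Django Channels' `AllowedHostsOriginValidator` (often wrapped around the ASGI application) rejecting an `Origin`/`Host` combination that is not covered by `ALLOWED_HOSTS`, and the `@proxy_to_ws` block above does not forward the `Host` header. A way to isolate where the 403 originates (a sketch; the path and host names are placeholders for your setup) is to replay the handshake directly against Daphne on the instance, bypassing Nginx and the load balancer:

```
# Simulate the WebSocket upgrade request straight at Daphne on port 8001.
# --max-time stops curl from hanging if the upgrade to 101 succeeds.
curl -i --max-time 5 http://127.0.0.1:8001/ws/your-endpoint/ \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  -H "Host: domain_name.com" \
  -H "Origin: http://domain_name.com"
```

If this returns `101 Switching Protocols` but the same request through Nginx fails with 403, the `/ws` proxy block is likely missing headers such as `proxy_set_header Host $host;`; if it returns 403 here too, the rejection is coming from the Django/Channels side rather than the proxy chain.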
0 answers, 0 votes, 8 views, asked 10 days ago

Not using "noexec" with "/run" mount, on EC2 Ubuntu 22.04.1 LTS

I believe this *might* be a security issue, as [this happened in 2014](https://www.tenable.com/plugins/nessus/73180), but I would rather not pay $29 for "Premium Support". It looks like the `initramfs` is not always mounting the `/run` partition as `noexec`. A stock `Ubuntu 22.04` install shows the `noexec` mount option is present ([source](https://askubuntu.com/a/1432445/924107)), so I suspect one of the AWS modifications has affected this. I can check four EC2 servers running `Ubuntu 22.04.1 LTS`: three of them upgraded from `Ubuntu 20.04.5`, the other started fresh a few weeks ago... oddly, two of the upgraded servers have kept the `noexec`.

```
# New server
# Launched: Sep 02 2022
# AMI name: ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20220609

mount | grep '/run '
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=803020k,nr_inodes=819200,mode=755,inode64)

uname -a
Linux HostB 5.15.0-1020-aws #24-Ubuntu SMP Thu Sep 1 16:04:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
```

```
# Upgraded server
# Launched: Apr 25 2022
# AMI name: ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20211129

mount | grep '/run '
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=94812k,nr_inodes=819200,mode=755,inode64)

uname -a
Linux HostA 5.15.0-1020-aws #24-Ubuntu SMP Thu Sep 1 16:04:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
```

```
# Upgraded server
# Launched: Nov 16 2021
# AMI name: ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20180522

mount | grep '/run '
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=47408k,mode=755,inode64)

uname -a
Linux HostC 5.15.0-1020-aws #24-Ubuntu SMP Thu Sep 1 16:04:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
```

```
# Upgraded server
# Launched: Feb 10 2017
# AMI name: ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20170113

mount | grep '/run '
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=202012k,mode=755,inode64)

uname -a
Linux HostD 5.15.0-48-generic #54-Ubuntu SMP Fri Aug 26 13:26:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
```

---

Update 2022-09-28: Thanks to [Andrew Lowther](https://askubuntu.com/a/1432445/924107), it looks like a **temporary** workaround is to use the details in this [initramfs does not get loaded](https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1870189) bug report:

```
mv /etc/default/grub.d/40-force-partuuid.cfg{,.bak}; update-grub;
```
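If the `noexec` flag is needed before the boot path is fixed, one possible stopgap (a sketch, not an official fix; it does not persist across reboots, so pair it with the GRUB workaround above) is to restore the option at runtime, since `/run` is a tmpfs and accepts a live remount:

```
# Remount /run with noexec added to its existing options:
sudo mount -o remount,noexec /run

# Verify the option list now includes noexec:
mount | grep '/run '
```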
1 answer, 0 votes, 43 views, asked 10 days ago