Unanswered Questions tagged with Amazon EC2

Bootstrap failure due to the arm64 version of numpy required for r6 instances?

I was trying to upgrade from r5 to the latest r6 instances and ran into an issue installing numpy via pip in our bootstrap script. I found [this post](https://repost.aws/questions/QUdF4dL0k9RTeAZaUFiPDJCw/emr-bootstrap-script-with-pip-numpy-installation-fails-on-r-6-instances) that is related to my issue. Has anyone been able to resolve this without building your own arm64 wheel of numpy?

EC2/EMR cluster config:

```
Release label: emr-6.5.0
Instance Type: r6gd.8xlarge
```

Snippet of the bootstrap:

```
#!/bin/bash

# python version
pyv="$(python3 -V 2>&1)"
echo "Python version: $pyv"

# misc code to link up the requirements.txt

echo "`date -u` install python dependencies"

# Install Python deps
sudo python3 -m pip install wheel
sudo python3 -m pip install -r requirements.txt
```

requirements.txt:

```
boto3==1.18.46
Cython==0.29.24
pandas==1.3.3
numpy==1.21.2
```

Log output:

```
+ echo 'Python version: Python 3.7.10'
...
+ echo 'Thu Sep 29 22:16:46 UTC 2022 install python dependencies'
+ sudo python3 -m pip install wheel
WARNING: Running pip install with root privileges is generally not a good idea. Try `python3 -m pip install --user` instead.
WARNING: The script wheel is installed in '/usr/local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
+ sudo python3 -m pip install -r job-requirements.txt
WARNING: Running pip install with root privileges is generally not a good idea. Try `python3 -m pip install --user` instead.
ERROR: Command errored out with exit status 1:
  command: /bin/python3 /usr/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /mnt/tmp/pip-build-env-yy928eo_/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'cython >= 0.29' 'numpy==1.14.5; python_version<'"'"'3.7'"'"'' 'numpy==1.16.0; python_version>='"'"'3.7'"'"'' setuptools setuptools_scm wheel
  cwd: None
  Complete output (866 lines):
  WARNING: Running pip install with root privileges is generally not a good idea. Try `pip install --user` instead.
  Ignoring numpy: markers 'python_version < "3.7"' don't match your environment
  Collecting cython>=0.29
    Using cached Cython-0.29.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl (1.8 MB)
  Collecting numpy==1.16.0
    Downloading numpy-1.16.0.zip (5.1 MB)
  Collecting setuptools
    Downloading setuptools-65.4.0-py3-none-any.whl (1.2 MB)
  Collecting setuptools_scm
    Downloading setuptools_scm-7.0.5-py3-none-any.whl (42 kB)
  Collecting wheel
    Using cached wheel-0.37.1-py2.py3-none-any.whl (35 kB)
  Collecting packaging>=20.0
    Downloading packaging-21.3-py3-none-any.whl (40 kB)
  Collecting tomli>=1.0.0
    Downloading tomli-2.0.1-py3-none-any.whl (12 kB)
  Collecting typing-extensions
    Downloading typing_extensions-4.3.0-py3-none-any.whl (25 kB)
  Collecting importlib-metadata; python_version < "3.8"
    Downloading importlib_metadata-4.12.0-py3-none-any.whl (21 kB)
  Collecting pyparsing!=3.0.5,>=2.0.2
    Downloading pyparsing-3.0.9-py3-none-any.whl (98 kB)
  Collecting zipp>=0.5
    Downloading zipp-3.8.1-py3-none-any.whl (5.6 kB)
  ...
  _configtest.c:1:10: fatal error: Python.h: No such file or directory
   #include <Python.h>
            ^~~~~~~~~~
  compilation terminated.
  failure.
  removing: _configtest.c _configtest.o
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/mnt/tmp/pip-install-tl9eju6y/numpy/setup.py", line 415, in <module>
      setup_package()
    File "/mnt/tmp/pip-install-tl9eju6y/numpy/setup.py", line 407, in setup_package
      setup(**metadata)
    File "/mnt/tmp/pip-install-tl9eju6y/numpy/numpy/distutils/core.py", line 171, in setup
      return old_setup(**new_attr)
    File "/usr/lib/python3.7/site-packages/setuptools/__init__.py", line 165, in setup
      return distutils.core.setup(**attrs)
    File "/usr/lib64/python3.7/distutils/core.py", line 148, in setup
      dist.run_commands()
    File "/usr/lib64/python3.7/distutils/dist.py", line 966, in run_commands
      self.run_command(cmd)
    File "/usr/lib64/python3.7/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/mnt/tmp/pip-install-tl9eju6y/numpy/numpy/distutils/command/install.py", line 62, in run
      r = self.setuptools_run()
    File "/mnt/tmp/pip-install-tl9eju6y/numpy/numpy/distutils/command/install.py", line 36, in setuptools_run
      return distutils_install.run(self)
    File "/usr/lib64/python3.7/distutils/command/install.py", line 556, in run
      self.run_command('build')
    File "/usr/lib64/python3.7/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/usr/lib64/python3.7/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/mnt/tmp/pip-install-tl9eju6y/numpy/numpy/distutils/command/build.py", line 47, in run
      old_build.run(self)
    File "/usr/lib64/python3.7/distutils/command/build.py", line 135, in run
      self.run_command(cmd_name)
    File "/usr/lib64/python3.7/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/usr/lib64/python3.7/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/mnt/tmp/pip-install-tl9eju6y/numpy/numpy/distutils/command/build_src.py", line 148, in run
      self.build_sources()
    File "/mnt/tmp/pip-install-tl9eju6y/numpy/numpy/distutils/command/build_src.py", line 165, in build_sources
      self.build_extension_sources(ext)
    File "/mnt/tmp/pip-install-tl9eju6y/numpy/numpy/distutils/command/build_src.py", line 322, in build_extension_sources
      sources = self.generate_sources(sources, ext)
    File "/mnt/tmp/pip-install-tl9eju6y/numpy/numpy/distutils/command/build_src.py", line 375, in generate_sources
      source = func(extension, build_dir)
    File "numpy/core/setup.py", line 423, in generate_config_h
      moredefs, ignored = cocache.check_types(config_cmd, ext, build_dir)
    File "numpy/core/setup.py", line 47, in check_types
      out = check_types(*a, **kw)
    File "numpy/core/setup.py", line 281, in check_types
      "install {0}-dev|{0}-devel.".format(python))
  SystemError: Cannot compile 'Python.h'. Perhaps you need to install python-dev|python-devel.
  ----------------------------------------
ERROR: Command errored out with exit status 1: /bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/mnt/tmp/pip-install-tl9eju6y/numpy/setup.py'"'"'; __file__='"'"'/mnt/tmp/pip-install-tl9eju6y/numpy/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /mnt/tmp/pip-record-paofd9vx/install-record.txt --single-version-externally-managed --prefix /mnt/tmp/pip-build-env-yy928eo_/overlay --compile --install-headers /mnt/tmp/pip-build-env-yy928eo_/overlay/include/python3.7m/numpy Check the logs for full command output.
----------------------------------------
ERROR: Command errored out with exit status 1: /bin/python3 /usr/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /mnt/tmp/pip-build-env-yy928eo_/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'cython >= 0.29' 'numpy==1.14.5; python_version<'"'"'3.7'"'"'' 'numpy==1.16.0; python_version>='"'"'3.7'"'"'' setuptools setuptools_scm wheel Check the logs for full command output.
```
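The traceback ends with numpy's own hint ("install python-dev|python-devel"): pip is falling back to building numpy 1.16.0 from source on the Graviton (aarch64) node and cannot find `Python.h`. Below is a minimal, unconfirmed sketch of a bootstrap tweak that follows that hint; the `python3-devel` package name and the assumption that a newer pip will select prebuilt aarch64 wheels are mine, not taken from the thread:

```bash
#!/bin/bash
# Hypothetical bootstrap tweak -- not a confirmed fix.
# - python3-devel and gcc provide Python.h and a compiler for any packages that
#   still have to build from source on aarch64 (package names assume the
#   Amazon Linux 2 image used by EMR 6.x).
# - Upgrading pip first helps it resolve prebuilt manylinux2014_aarch64 wheels
#   instead of compiling numpy from an sdist.
set -euo pipefail

sudo yum install -y python3-devel gcc
sudo python3 -m pip install --upgrade pip setuptools wheel
sudo python3 -m pip install -r requirements.txt
```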
0 answers · 0 votes · 2 views · asked an hour ago

EKS Node Not Ready

I have an EKS cluster with 4 t3.large nodes running roughly 50 small pods. Often, when I update the application version from x to y, a few nodes go into a NotReady state. I then have to clean up a few resources and reboot the worker node before things return to normal. Any suggestions?

Logs from kube-proxy:

```
I0927 16:12:05.785853 1 proxier.go:790] "SyncProxyRules complete" elapsed="104.231873ms"
I0927 16:18:27.078985 1 trace.go:205] Trace[1094698301]: "iptables ChainExists" (27-Sep-2022 16:16:36.489) (total time: 66869ms):
Trace[1094698301]: [1m6.869976178s] [1m6.869976178s] END
I0927 16:18:27.087821 1 trace.go:205] Trace[1957650533]: "iptables ChainExists" (27-Sep-2022 16:16:36.466) (total time: 67555ms):
Trace[1957650533]: [1m7.555663612s] [1m7.555663612s] END
I0927 16:18:27.124923 1 trace.go:205] Trace[460012371]: "DeltaFIFO Pop Process" ID:monitoring/prometheus-prometheus-node-exporter-gslfb,Depth:36,Reason:slow event handlers blocking the queue (27-Sep-2022 16:18:26.836) (total time: 186ms):
Trace[460012371]: [186.190275ms] [186.190275ms] END
W0927 16:18:27.248231 1 reflector.go:442] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0927 16:18:27.272469 1 reflector.go:442] k8s.io/client-go/informers/factory.go:134: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0927 16:18:31.339045 1 trace.go:205] Trace[1140734081]: "DeltaFIFO Pop Process" ID:cuberun/cuberun-cuberun,Depth:42,Reason:slow event handlers blocking the queue (27-Sep-2022 16:18:30.696) (total time: 116ms):
Trace[1140734081]: [116.029921ms] [116.029921ms] END
I0927 16:18:32.403993 1 trace.go:205] Trace[903972463]: "DeltaFIFO Pop Process" ID:cuberundemo/cuberun-cuberundemo,Depth:41,Reason:slow event handlers blocking the queue (27-Sep-2022 16:18:31.657) (total time: 196ms):
Trace[903972463]: [196.24798ms] [196.24798ms] END
I0927 16:18:33.233172 1 trace.go:205] Trace[1265312678]: "DeltaFIFO Pop Process" ID:argocd/argocd-metrics,Depth:40,Reason:slow event handlers blocking the queue (27-Sep-2022 16:18:32.738) (total time: 359ms):
Trace[1265312678]: [359.090093ms] [359.090093ms] END
I0927 16:18:33.261077 1 proxier.go:823] "Syncing iptables rules"
I0927 16:18:35.474678 1 proxier.go:790] "SyncProxyRules complete" elapsed="2.867637015s"
I0927 16:18:35.587939 1 proxier.go:823] "Syncing iptables rules"
I0927 16:18:37.014157 1 proxier.go:790] "SyncProxyRules complete" elapsed="1.45321438s"
I0927 16:19:08.904513 1 trace.go:205] Trace[1753182031]: "iptables ChainExists" (27-Sep-2022 16:19:06.254) (total time: 2266ms):
Trace[1753182031]: [2.266311394s] [2.266311394s] END
I0927 16:19:08.904456 1 trace.go:205] Trace[228375231]: "iptables ChainExists" (27-Sep-2022 16:19:06.299) (total time: 2255ms):
Trace[228375231]: [2.255433291s] [2.255433291s] END
I0927 16:19:40.540864 1 trace.go:205] Trace[2069259157]: "iptables ChainExists" (27-Sep-2022 16:19:36.494) (total time: 3430ms):
Trace[2069259157]: [3.430008597s] [3.430008597s] END
I0927 16:19:40.540873 1 trace.go:205] Trace[757252858]: "iptables ChainExists" (27-Sep-2022 16:19:36.304) (total time: 3619ms):
Trace[757252858]: [3.61980147s] [3.61980147s] END
I0927 16:20:09.976580 1 trace.go:205] Trace[2070318544]: "iptables ChainExists" (27-Sep-2022 16:20:06.285) (total time: 3182ms):
Trace[2070318544]: [3.182449365s] [3.182449365s] END
I0927 16:20:09.976592 1 trace.go:205] Trace[852062251]: "iptables ChainExists" (27-Sep-2022 16:20:06.313) (total time: 3154ms):
Trace[852062251]: [3.154369999s] [3.154369999s] END
```
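A minimal diagnostic sketch, assuming `kubectl` access to the cluster (the node name is a placeholder), that captures what the kubelet is reporting while a node is NotReady instead of rebooting straight away:

```bash
# Diagnostic sketch -- <node-name> is a placeholder for an affected node.
kubectl get nodes -o wide

# Which condition is failing (Ready/MemoryPressure/DiskPressure/PIDPressure) and why:
kubectl describe node <node-name> | sed -n '/Conditions:/,/Addresses:/p'

# Recent node-level events (evictions, kubelet restarts, OOM kills):
kubectl get events -A --field-selector involvedObject.kind=Node --sort-by=.lastTimestamp | tail -n 20

# kube-proxy logs on that node, to compare against the traces above:
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50
```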
0 answers · 0 votes · 13 views · asked 2 days ago

Django Daphne WebSocket Access Denied

We need to establish a WebSocket connection to our AWS servers using Django, Django Channels, Redis, and Daphne behind Nginx. The local and on-premises setup is configured properly; we need help configuring the same communication with the staging server. We tried adding the config below to our servers, but the WebSocket request is rejected with a 403 (access denied) response from the server.

Below is the **Nginx config** for staging:

```
server {
    listen 80;
    server_name domain_name.com domain_name_2.com;

    root /var/www/services/project_name_frontend/;
    index index.html;

    location ~ ^/api/ {
        rewrite ^/api/(.*) /$1 break;
        proxy_pass http://unix:/var/www/services/enerlly_backend/backend/backend.sock;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_read_timeout 30;
        proxy_connect_timeout 30;
        proxy_send_timeout 30;
        send_timeout 30;
        proxy_redirect ~^/(.*) $scheme://$host/api/$1;
    }

    location /ws {
        try_files $uri @proxy_to_ws;
    }

    location @proxy_to_ws {
        proxy_pass http://127.0.0.1:8001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
    }

    location ~ ^/admin/ {
        proxy_pass http://unix:/var/www/services/project_name_backend/backend/backend.sock;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_read_timeout 30;
        proxy_connect_timeout 30;
        proxy_send_timeout 30;
        send_timeout 30;
        proxy_redirect off;
    }

    location /staticfiles/ {
        alias /var/www/services/project_name_backend/backend/staticfiles/;
    }

    location /mediafiles/ {
        alias /var/www/services/project_name_backend/backend/mediafiles/;
    }

    location / {
        try_files $uri /index.html;
    }
}
```

And the **systemd service** that runs the Django Daphne process:

```
[Unit]
Description=Backend Project Django WebSocket daemon
After=network.target

[Service]
User=root
Group=www-data
WorkingDirectory=/var/www/services/project_name_backend
ExecStart=/home/ubuntu/project_python_venv/bin/python /home/ubuntu/project_python_venv/bin/daphne -b 0.0.0.0 -p 8001 project_name_backend.prod_asgi:application

[Install]
WantedBy=multi-user.target
```

**Load Balancer security group inbound rules:**

![Enter image description here](/media/postImages/original/IMN2LT2BlTSmK0PHEAu5dwHQ)

**Listener config for the Load Balancer:**

![Enter image description here](/media/postImages/original/IMxBGKpaJOSrSsOQyn5FEt-Q)

![Enter image description here](/media/postImages/original/IMktSIYK0ZSOy8GzYyR-DI_w)
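One way to narrow down where the 403 originates (Daphne/Channels host or origin validation, Nginx, or the load balancer) is to replay the WebSocket handshake hop by hop from the instance itself. A hedged sketch follows; the host names and the `/ws/` path are taken from the config above and may need adjusting:

```bash
# Hedged handshake test -- adjust host names and path to match the deployment.
# 1. Directly against Daphne (bypasses Nginx and the load balancer):
curl -i http://127.0.0.1:8001/ws/ \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  -H "Host: domain_name.com" \
  -H "Origin: http://domain_name.com"

# 2. Repeat the same request against Nginx on port 80, then against the load
#    balancer DNS name. The first hop that answers 403 instead of
#    "101 Switching Protocols" is where the rejection happens (for example
#    Channels' AllowedHostsOriginValidator / ALLOWED_HOSTS, an Nginx rule,
#    or the listener and security-group configuration).
```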
0 answers · 0 votes · 8 views · asked 3 days ago

Not using "noexec" with "/run" mount, on EC2 Ubuntu 22.04.1 LTS

I believe this *might* be a security issue, as [this happened in 2014](https://www.tenable.com/plugins/nessus/73180), but I would rather not pay $29 for "Premium Support". It looks like the `initramfs` is not always mounting the `/run` partition as `noexec`. A stock `Ubuntu 22.04` install shows the `noexec` mount option is present ([source](https://askubuntu.com/a/1432445/924107)), so I suspect one of the AWS modifications has affected this?

I can check four EC2 servers that are running `Ubuntu 22.04.1 LTS`; three of them were upgraded from `Ubuntu 20.04.5`, and the other was started new a few weeks ago... oddly, two of the upgraded servers have kept the `noexec`.

```
# New server
# Launched: Sep 02 2022
# AMI name: ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20220609

mount | grep '/run '
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=803020k,nr_inodes=819200,mode=755,inode64)

uname -a
Linux HostB 5.15.0-1020-aws #24-Ubuntu SMP Thu Sep 1 16:04:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
```

```
# Upgraded server
# Launched: Apr 25 2022
# AMI name: ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20211129

mount | grep '/run '
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=94812k,nr_inodes=819200,mode=755,inode64)

uname -a
Linux HostA 5.15.0-1020-aws #24-Ubuntu SMP Thu Sep 1 16:04:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
```

```
# Upgraded server
# Launched: Nov 16 2021
# AMI name: ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20180522

mount | grep '/run '
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=47408k,mode=755,inode64)

uname -a
Linux HostC 5.15.0-1020-aws #24-Ubuntu SMP Thu Sep 1 16:04:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
```

```
# Upgraded server
# Launched: Feb 10 2017
# AMI name: ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20170113

mount | grep '/run '
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=202012k,mode=755,inode64)

uname -a
Linux HostD 5.15.0-48-generic #54-Ubuntu SMP Fri Aug 26 13:26:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
```

---

Update 2022-09-28: Thanks to [Andrew Lowther](https://askubuntu.com/a/1432445/924107), it looks like a **temporary** workaround is to use the details in this [initramfs does not get loaded](https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1870189) bug report:

```
mv /etc/default/grub.d/40-force-partuuid.cfg{,.bak}; update-grub;
```
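Until the boot-time behaviour is sorted out, the flag can also be checked and re-applied at runtime. A minimal sketch, assuming root access; this only affects the running system, not what the initramfs does on the next boot:

```bash
# Check whether /run currently carries noexec, and remount with it if missing.
# Runtime mitigation only; the grub workaround above addresses the next boot.
if ! findmnt -no OPTIONS /run | grep -qw noexec; then
    sudo mount -o remount,nosuid,nodev,noexec /run
fi
findmnt -no OPTIONS /run
```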
0 answers · 0 votes · 27 views · asked 3 days ago

What to look at for resolving Nice DCV 404 errors

I've got an EC2 instance set up with NICE DCV. I have set up port access in my security rules and created a session in NICE DCV. However, whenever I try to connect to the session via the browser, I get an HTTP ERROR 404. I can't seem to find any information in the NICE DCV docs about causes of 404, except for the session resolver, which I'm not using. How can I go about resolving this issue?

Below is the output from `dcv list-sessions -j`:

```
[
  {
    "id" : "cloud9-session",
    "owner" : "ubuntu",
    "num-of-connections" : 0,
    "creation-time" : "2022-09-23T12:58:40.919860Z",
    "last-disconnection-time" : "",
    "licenses" : [
      {
        "product" : "dcv",
        "status" : "licensed",
        "check-timestamp" : "2022-09-23T12:58:42.540422Z",
        "expiration-date" : ""
      },
      {
        "product" : "dcv-gl",
        "status" : "licensed",
        "check-timestamp" : "2022-09-23T12:58:42.540422Z",
        "expiration-date" : ""
      }
    ],
    "licensing-mode" : "EC2",
    "storage-root" : "",
    "type" : "virtual",
    "status" : "running",
    "x11-display" : ":0",
    "x11-authority" : "/run/user/1000/dcv/cloud9-session.xauth",
    "display-layout" : [
      {
        "width" : 800,
        "height" : 600,
        "x" : 0,
        "y" : 0
      }
    ]
  }
]
```

This is the output from `dcv get-config`:

```
[connectivity]
web-use-https = false
web-port = 8080
web-extra-http-headers = [('test-header', 'test-value')]

[security]
authentication = 'none'
```

This is the output from `systemctl status dcvserver`:

```
● dcvserver.service - NICE DCV server daemon
     Loaded: loaded (/lib/systemd/system/dcvserver.service; enabled; vendor preset: enable>
     Active: active (running) since Fri 2022-09-23 12:58:40 UTC; 18min ago
   Main PID: 715 (dcvserver)
      Tasks: 6 (limit: 76196)
     Memory: 39.9M
     CGroup: /system.slice/dcvserver.service
             ├─715 /bin/bash /usr/bin/dcvserver -d --service
             └─724 /usr/lib/x86_64-linux-gnu/dcv/dcvserver --service

Sep 23 12:58:40 ip-10-0-0-115 systemd[1]: Starting NICE DCV server daemon...
Sep 23 12:58:40 ip-10-0-0-115 systemd[1]: Started NICE DCV server daemon.
```

I'm trying to access the page with `http://<public ip>:8080`. I've also tried including the `#session_id` part in the URL and using the Windows client, with no luck. My operating system is Ubuntu 20.04 with a custom AMI running on a g4dn.4xlarge instance.
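Given `web-use-https = false` and `web-port = 8080`, a couple of checks run on the instance itself can separate a local dcvserver problem from a networking one. A hedged sketch; the session ID comes from the `dcv list-sessions` output above, and the `dcv list-connections` subcommand is assumed to be available in this DCV release:

```bash
# Checks run on the instance itself, before involving the security group.
sudo ss -tlnp | grep 8080              # is dcvserver actually listening on 8080?
curl -i http://127.0.0.1:8080/         # does the built-in web server answer locally?

# Session-side view: have any connection attempts reached the session at all?
# (list-connections is assumed to exist in this DCV version)
dcv list-connections cloud9-session
```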
0 answers · 0 votes · 26 views · asked 7 days ago