Questions tagged with High Performance Compute


How to run a JVM-based HFT application on Graviton 3 CPUs

Thinking of creating a high-frequency trading (HFT) system to test my algo trading strategies. The infrastructure will be an AWS EC2 Graviton 3 instance (C7g) + Amazon Linux 2 + either the Amazon Corretto JVM runtime or the GraalVM distribution of OpenJDK. Why GraalVM? Because it is polyglot and reduces the context switching (marshalling/unmarshalling) between the data structures of different programming languages. The EC2 Graviton 3 instance comes with more than 100 virtual cores and more than 100 MB of combined L1 + L2 + L3 cache. Pre-compiled native ARM CPU instructions will be saved in the code cache. Data analytics will be done with Apache Spark 3, and the code will be JIT-aware (mostly Scala and R). Data will be populated from SSDs and processed in RAM.

Questions:

1. Will Amazon Corretto or GraalVM be capable of generating and interpreting native executable instructions for the ARM-based Graviton CPU?
2. Amazon Corretto is a flavor of OpenJDK. Has Project GraalVM already been merged into the Amazon Corretto JVM? Can I replace the C2 compiler of the Amazon Corretto JVM with **Java on Truffle** (the meta-circular JIT)?
3. Where can I find guides or whitepapers on which OpenJDK JEPs and projects Amazon Corretto supports?
4. Which extremely fast programming language should I choose for writing my algo trading business logic? I am expecting nanosecond-scale latency from the time a signal enters the Ethernet port to the time the result is returned.

Better suggestions and questions are always appreciated.
0 answers · 0 votes · 40 views
fasil — asked 10 months ago

DLAMI does not have CUDA/NVIDIA (and cannot access CUDA from PyTorch)

I am running on Deep Learning AMI (Ubuntu 18.04) Version 56.0 - ami-083abc80c473f5d88, but I have tried several similar DLAMIs. I am unable to access CUDA from PyTorch to train my models. See here:

```
$ apt list --installed | grep -i "nvidia"

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

libnvidia-compute-460-server/bionic-updates,bionic-security,now 460.106.00-0ubuntu0.18.04.2 amd64 [installed,automatic]
libnvidia-container-tools/bionic,now 1.7.0-1 amd64 [installed,automatic]
libnvidia-container1/bionic,now 1.7.0-1 amd64 [installed,automatic]
nvidia-container-toolkit/bionic,now 1.7.0-1 amd64 [installed]
nvidia-cuda-dev/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]
nvidia-cuda-doc/bionic,now 9.1.85-3ubuntu1 all [installed,automatic]
nvidia-cuda-gdb/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]
nvidia-cuda-toolkit/bionic,now 9.1.85-3ubuntu1 amd64 [installed]
nvidia-docker2/bionic,now 2.8.0-1 all [installed]
nvidia-fabricmanager-450/now 450.142.00-1 amd64 [installed,upgradable to: 450.156.00-0ubuntu0.18.04.1]
nvidia-opencl-dev/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]
nvidia-profiler/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]
nvidia-visual-profiler/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]
```

And it shows I have NVIDIA packages installed.
However, when I run Python:

```
~$ bpython
bpython version 0.22.1 on top of Python 3.8.12 /home/ubuntu/anaconda3/envs/pytorch_p38/bin/python3.8
>>> import torch.nn as nn
>>> import torch
>>> torch.cuda.is_available()
False
```

Even after I re-install the NVIDIA driver:

```
sudo apt install nvidia-driver-455
```

I get this:

```
(pytorch_p38) ubuntu@ip-172-31-95-17:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Oct_12_20:09:46_PDT_2020
Cuda compilation tools, release 11.1, V11.1.105
Build cuda_11.1.TC455_06.29190527_0

(pytorch_p38) ubuntu@ip-172-31-95-17:~$ bpython
bpython version 0.22.1 on top of Python 3.8.12 /home/ubuntu/anaconda3/envs/pytorch_p38/bin/python3.8
>>> import torch
>>> torch.cuda.is_available()
False
```

Does anyone know how to get PyTorch to access CUDA? Any help is greatly appreciated.
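As a first step in narrowing this down, here is a minimal diagnostic sketch (the function name `cuda_diagnostics` is hypothetical, not part of any library) that separates the usual failure modes without assuming a working GPU: whether the NVIDIA driver tooling is on the PATH at all, and whether the installed `torch` is a CUDA-enabled build (`torch.version.cuda` is `None` for CPU-only wheels):

```python
# Hypothetical diagnostic helper: reports which layer of the CUDA stack
# is missing, without requiring a GPU to be present.
import importlib.util
import shutil


def cuda_diagnostics():
    report = {}
    # nvidia-smi ships with the kernel driver; if it is absent, the
    # driver (as opposed to the apt CUDA toolkit packages) is not installed.
    report["nvidia_smi_on_path"] = shutil.which("nvidia-smi") is not None
    # Check whether torch is importable in the current environment.
    report["torch_installed"] = importlib.util.find_spec("torch") is not None
    if report["torch_installed"]:
        import torch
        # None here means the environment holds a CPU-only PyTorch build,
        # so no driver fix will ever make cuda_available True.
        report["torch_cuda_build"] = torch.version.cuda
        report["cuda_available"] = torch.cuda.is_available()
    return report


print(cuda_diagnostics())
```

On a stock DLAMI conda environment, a non-`None` `torch_cuda_build` together with `cuda_available: False` would point at the driver side (check `nvidia-smi` directly), whereas `torch_cuda_build: None` would mean the environment's PyTorch itself needs replacing with a CUDA build.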
0 answers · 0 votes · 54 views
asked 10 months ago