AWS ParallelCluster compute nodes failing to start properly


Hello, I am a new ParallelCluster 2.11 user and am having an issue where my compute nodes fail to spin up properly, resulting in the eventual failure of pcluster create. Here is my config file:
[aws]
aws_region_name = us-east-1

[aliases]
ssh = ssh {CFN_USER}@{MASTER_IP} {ARGS}

[global]
cluster_template = default
update_check = true
sanity_check = true

[cluster default]
key_name = <my-keypair>
scheduler = slurm
master_instance_type = c5n.2xlarge
base_os = centos7
vpc_settings = default
queue_settings = compute
master_root_volume_size = 1000
compute_root_volume_size = 35

[vpc default]
vpc_id = <my-default-vpc-id>
master_subnet_id = <my-subneta>
compute_subnet_id = <my-subnetb>
use_public_ips = false

[queue compute]
enable_efa = true
compute_resource_settings = default
compute_type = ondemand
placement_group = DYNAMIC
disable_hyperthreading = true

[compute_resource default]
instance_type = c5n.18xlarge
initial_count = 1
min_count = 1
max_count = 32

[ebs shared]
shared_dir = shared
volume_type = st1
volume_size = 500

When I run pcluster create, I get the following error after ~15 minutes:
The following resource(s) failed to create: MasterServer.
- AWS::EC2::Instance MasterServer Failed to receive 1 resource signal(s) within the specified duration

If I log into the master node before the failure above, I see the following in the /var/log/parallelcluster/clustermgtd log file:
2021-09-28 15:42:41,168 - slurm_plugin.clustermgtd:_maintain_nodes - INFO - Found the following unhealthy static nodes: (x1) 'compute-st-c5n18xlarge-1(compute-st-c5n18xlarge-1)'
2021-09-28 15:42:41,168 - slurm_plugin.clustermgtd:_handle_unhealthy_static_nodes - INFO - Setting unhealthy static nodes to DOWN

However, despite the node being set to DOWN, the EC2 compute instance stays in the running state, and the same log continually emits the following message:

2021-09-28 15:54:41,156 - slurm_plugin.clustermgtd:_maintain_nodes - INFO - Following nodes are currently in replacement: (x1) 'compute-st-c5n18xlarge-1'

This state persists until the pcluster create command fails with the error noted above. I suspect there is something wrong with my configuration; any help or further troubleshooting advice would be appreciated.

Edited by: notknottheory on Sep 28, 2021 9:19 AM

Asked 3 years ago · Viewed 426 times
2 answers

I was originally using two public subnets: one for the head node and one for the compute nodes. Switching the compute nodes to a private subnet solved the problem. (With use_public_ips = false, compute nodes in a public subnet get no public IP and have no NAT route, so they cannot reach the AWS endpoints needed to bootstrap.) Alternatively, not specifying a compute subnet and setting use_public_ips to true also solved the problem.

After these steps the compute nodes spun up successfully and I was able to run my jobs through slurm.
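For reference, this is roughly the vpc section that worked for me. The subnet IDs are placeholders, and the private compute subnet is assumed to have a route to a NAT gateway so the nodes can reach the AWS endpoints they need during bootstrap:

```ini
[vpc default]
vpc_id = <my-default-vpc-id>
# head node stays in a public subnet
master_subnet_id = <my-public-subnet>
# compute subnet is private; its route table must point 0.0.0.0/0 at a NAT gateway
compute_subnet_id = <my-private-subnet>
use_public_ips = false
```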

Answered 3 years ago
