AWS ParallelCluster compute nodes failing to start properly


Hello, I am a new ParallelCluster 2.11 user and am having an issue where my compute nodes fail to spin up properly, resulting in the eventual failure of pcluster create. Here is my config file:

[aws]
aws_region_name = us-east-1

[aliases]
ssh = ssh {CFN_USER}@{MASTER_IP} {ARGS}

[global]
cluster_template = default
update_check = true
sanity_check = true

[cluster default]
key_name = <my-keypair>
scheduler = slurm
master_instance_type = c5n.2xlarge
base_os = centos7
vpc_settings = default
queue_settings = compute
master_root_volume_size = 1000
compute_root_volume_size = 35

[vpc default]
vpc_id = <my-default-vpc-id>
master_subnet_id = <my-subneta>
compute_subnet_id = <my-subnetb>
use_public_ips = false

[queue compute]
enable_efa = true
compute_resource_settings = default
compute_type = ondemand
placement_group = DYNAMIC
disable_hyperthreading = true

[compute_resource default]
instance_type = c5n.18xlarge
initial_count = 1
min_count = 1
max_count = 32

[ebs shared]
shared_dir = shared
volume_type = st1
volume_size = 500

When I run pcluster create I get the following error after ~15 min:
The following resource(s) failed to create: MasterServer.
- AWS::EC2::Instance MasterServer Failed to receive 1 resource signal(s) within the specified duration

If I log into the master node before the failure above I see the following in the /var/log/parallelcluster/clustermgtd log file:
2021-09-28 15:42:41,168 - slurm_plugin.clustermgtd:_maintain_nodes - INFO - Found the following unhealthy static nodes: (x1) 'compute-st-c5n18xlarge-1(compute-st-c5n18xlarge-1)'
2021-09-28 15:42:41,168 - slurm_plugin.clustermgtd:_handle_unhealthy_static_nodes - INFO - Setting unhealthy static nodes to DOWN

However, despite the node being set to DOWN, the EC2 compute instance remains in the running state, and the same log continually emits the following message:

2021-09-28 15:54:41,156 - slurm_plugin.clustermgtd:_maintain_nodes - INFO - Following nodes are currently in replacement: (x1) 'compute-st-c5n18xlarge-1'

This state persists until the pcluster create command fails with the error noted above. I suspect something is wrong with my configuration; any help or further troubleshooting advice would be appreciated.
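The clustermgtd messages above follow a regular pattern, so they can be scanned programmatically when a cluster has many nodes. A minimal sketch (the regex and helper name are my own, not part of ParallelCluster):

```python
import re

# Matches clustermgtd lines like:
# ... Found the following unhealthy static nodes: (x1) 'name-1(name-1)'
UNHEALTHY_RE = re.compile(
    r"Found the following unhealthy static nodes: \(x\d+\) '([^'(]+)"
)

def unhealthy_nodes(log_lines):
    """Return node names reported unhealthy by clustermgtd."""
    nodes = []
    for line in log_lines:
        m = UNHEALTHY_RE.search(line)
        if m:
            nodes.append(m.group(1))
    return nodes

sample = [
    "2021-09-28 15:42:41,168 - slurm_plugin.clustermgtd:_maintain_nodes - INFO - "
    "Found the following unhealthy static nodes: (x1) "
    "'compute-st-c5n18xlarge-1(compute-st-c5n18xlarge-1)'",
]
print(unhealthy_nodes(sample))  # ['compute-st-c5n18xlarge-1']
```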

Edited by: notknottheory on Sep 28, 2021 9:19 AM

asked 3 years ago, 414 views
2 Answers

I was originally using two public subnets: one for the head node and one for the compute nodes. Switching the compute nodes to a private subnet solved the problem. Alternatively, not specifying a compute_subnet_id and setting use_public_ips to true in the [vpc] section also solved the problem.

After these steps the compute nodes spun up successfully and I was able to run my jobs through slurm.
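For reference, a sketch of the adjusted [vpc] section for the second approach (the IDs are placeholders, as in the original config):

```ini
[vpc default]
vpc_id = <my-default-vpc-id>
master_subnet_id = <my-subneta>
# no compute_subnet_id: compute nodes launch in the master subnet
use_public_ips = true
```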

answered 3 years ago
