Failed to start Preprocess NFS configuration


Hello, my instance refuses to boot and shows the messages below.

Please let me know how I can resolve this issue without losing the data on the instance. It seems that NFS is not working, but the instance is unreachable through SSH. Thanks in advance.

[FAILED] Failed to start Preprocess NFS configuration. See 'systemctl status nfs-config.service' for details.
[ 6.527025] audit: type=1130 audit(1630350328.112:63): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=nfs-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
[ 6.552844] audit: audit_lost=2 audit_rate_limit=0 audit_backlog_limit=64
[ 6.563373] audit: kauditd hold queue overflow
[ 6.679619] audit: type=1130 audit(1630350328.380:64): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=plymouth-read-write comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'

ymzz
asked 10 months ago · 282 views
2 Answers
Accepted Answer

Hello. I understand that your instance is not able to boot and that you would like to know how the issue can be resolved.

[FAILED] Failed to start Preprocess NFS configuration. See 'systemctl status nfs-config.service' for details.
[ 6.527025] audit: type=1130 audit(1630350328.112:63): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=nfs-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
[ 6.552844] audit: audit_lost=2 audit_rate_limit=0 audit_backlog_limit=64
[ 6.563373] audit: kauditd hold queue overflow
[ 6.679619] audit: type=1130 audit(1630350328.380:64): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=plymouth-read-write comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'

I suspect that the issue you are facing might be due to a configuration saved in /etc/fstab: the NFS volume cannot be mounted onto the system, so the boot process fails. To troubleshoot further, please perform the steps below and let me know the result. Before proceeding, creating an AMI or snapshot of the instance is highly recommended: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html
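For context, the kind of /etc/fstab entry that can block boot when the file system is unreachable typically looks something like the following. This is only an illustrative sketch; the EFS DNS name and mount point are placeholders, not values taken from your instance:

fs-12345678.efs.us-east-1.amazonaws.com:/  /mnt/efs  nfs4  nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2  0  0

Because an entry like this has no nofail option, systemd treats the failed mount as fatal during boot. The steps below let you edit the file offline from a rescue instance.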

  1. Launch a new EC2 instance in your virtual private cloud (VPC) using the same Amazon Machine Image (AMI) and in the same Availability Zone as the impaired instance. The new instance becomes your rescue instance. Or, you can use an existing instance that you can access, if it uses the same AMI and is in the same Availability Zone as your impaired instance.

  2. Stop the impaired instance.

  3. Detach the Amazon Elastic Block Store (Amazon EBS) root volume (/dev/sda1) from your impaired instance. Note the device name (/dev/xvda or /dev/sda1) of your root volume.

  4. Attach the EBS volume as a secondary device (/dev/sdf) to the rescue instance. (A rough AWS CLI sketch of these detach/attach steps is included after this list.)

  5. Connect to your rescue instance using SSH.

  6. Create a mount point directory (/rescue) for the volume attached to the rescue instance: $ sudo mkdir /rescue

  7. Mount the volume at the directory that you created in step 6: $ sudo mount /dev/sdf1 /rescue. Note: The device (/dev/sdf1) might be attached to the rescue instance with a different device name. Use the lsblk command to view your available disk devices, along with their mount points, to determine the correct device name.

  8. Run $ sudo vi /rescue/etc/fstab and revise the automount configuration as needed. In this case, remove or comment out the entry that is failing to mount (for example, the /dev/xvdg or /dev/sdg entry, or the NFS entry).

  9. Run the umount command to unmount the secondary device from your rescue instance: $ sudo umount /rescue. If the unmount operation isn't successful, you might have to stop or reboot the rescue instance to enable a clean unmount.

  10. Detach the secondary volume (/dev/sdf) from the rescue instance. Then, attach it to the original instance as the root volume, using the device name you noted in step 3 (/dev/xvda or /dev/sda1).

  11. Start the original instance and verify that it is responsive. Please refer to the documentation below for the recommended NFS mount options. [3] https://docs.aws.amazon.com/efs/latest/ug/mounting-fs-nfs-mount-settings.html
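If you prefer to script the detach/attach portion of the procedure above, a rough AWS CLI sketch is shown below. The instance IDs, volume ID, and Region are placeholders for illustration only, not values from your account:

# Stop the impaired instance and detach its root volume (steps 2-3)
aws ec2 stop-instances --instance-ids i-IMPAIRED --region us-east-1
aws ec2 wait instance-stopped --instance-ids i-IMPAIRED --region us-east-1
aws ec2 detach-volume --volume-id vol-0123456789abcdef0 --region us-east-1
aws ec2 wait volume-available --volume-ids vol-0123456789abcdef0 --region us-east-1

# Attach the volume to the rescue instance as a secondary device (step 4)
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-RESCUE --device /dev/sdf --region us-east-1

# ... edit /rescue/etc/fstab on the rescue instance, then unmount (steps 5-9) ...

# Move the volume back and start the original instance (steps 10-11)
aws ec2 detach-volume --volume-id vol-0123456789abcdef0 --region us-east-1
aws ec2 wait volume-available --volume-ids vol-0123456789abcdef0 --region us-east-1
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-IMPAIRED --device /dev/sda1 --region us-east-1
aws ec2 start-instances --instance-ids i-IMPAIRED --region us-east-1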

Please note that if your EC2 instance needs to start regardless of the status of your mounted EFS file system, add the nofail option to your file system's entry in the /etc/fstab file.
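As an illustration only (again, the EFS DNS name and mount point are placeholders), a corrected entry with the recommended options plus nofail might look like this:

fs-12345678.efs.us-east-1.amazonaws.com:/  /mnt/efs  nfs4  nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,nofail,_netdev  0  0

With nofail, systemd logs the failed mount but continues booting; _netdev tells the system to wait for networking before attempting the mount.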

Have a great day ahead!

References: https://docs.aws.amazon.com/efs/latest/ug/mounting-fs-nfs-mount-settings.html

AWS
East
answered 10 months ago
EXPERT
reviewed 10 months ago

As described in the following document, the recommended approach is to create a rescue EC2 instance, attach the root EBS volume of the problem instance to it, and edit the file directly.
Please review the document and troubleshoot in a way that fits your environment.
https://repost.aws/knowledge-center/ec2-linux-emergency-mode

EXPERT
answered 10 months ago
