AWS instance: "Failed to start LSB: Bring up/down networking" error message


I am getting this error and cannot SSH into the instance. I have already restarted it, but I still can't connect. What should I do?

The steps to replicate this are:

  1. We create an AMI

  2. We create an instance based on that AMI

  3. The instance's reachability check fails with status "impaired"

  4. The system log says: "Failed to start LSB: Bring up/down networking" and "Started Crash recovery kernel arming".
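
For reference, I pulled the system log with the AWS CLI (the instance ID and region below are placeholders for my own):

    $ aws ec2 get-console-output --instance-id i-0123456789abcdef0 \
          --region us-east-1 --output text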

Thanks in advance

2 Answers

I would start by troubleshooting with the EC2 Serial Console: try to determine which drivers or modules are failing to load and causing the connectivity issues.
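
The serial console lets you watch boot messages even when the instance has no working networking. A minimal connection sketch using EC2 Instance Connect (the instance ID, key path, and region are placeholders):

    $ aws ec2-instance-connect send-serial-console-ssh-public-key \
          --instance-id i-0123456789abcdef0 --serial-port 0 \
          --ssh-public-key file://~/.ssh/id_rsa.pub
    $ ssh -i ~/.ssh/id_rsa i-0123456789abcdef0.port0@serial-console.ec2-instance-connect.us-east-1.aws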

If you're starting from a public AMI, launch an instance from that AMI and confirm that it boots correctly; then make your changes one by one until you find the change that causes the error messages you're seeing.
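
For example, to launch a test instance from the AMI (the AMI ID, instance type, and key pair name are placeholders):

    $ aws ec2 run-instances --image-id ami-0123456789abcdef0 \
          --instance-type t3.micro --key-name my-key --count 1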

  • Thanks for answering. I am not starting from a public AMI. How would you determine which drivers or modules are failing to load and causing the connectivity issues?

  • My instance type is not built on the AWS Nitro System, so it seems it does not support the EC2 Serial Console. What am I missing?

  • If possible, use an AMI that supports Nitro; it gives you access to far more modern instance types. If that's not possible, the next best thing is to enable as much boot logging as possible (see the sketch below); then, when the launch fails, detach the EBS volume, attach it to a working instance, and go through the logs.
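
One way to capture more boot logging in the console output is to make the kernel log verbosely to the serial console that EC2 records. A minimal sketch, assuming a RHEL/CentOS-style AMI using GRUB2 (apply the change via chroot as in the answer below, then rebuild the GRUB config):

    # In /etc/default/grub, log to the serial console and drop "quiet", e.g.:
    #   GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
    $ sudo grub2-mkconfig -o /boot/grub2/grub.cfg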


Here is the solution:

    1. Stop the impaired instance and detach its root volume.
    2. Attach the root volume to a rescue instance running in the same Availability Zone.
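
       The same can be done from the AWS CLI; a hedged sketch where all instance and volume IDs are placeholders:

            # i-0123... is the impaired instance, i-0fed... the rescue instance
            $ aws ec2 stop-instances --instance-ids i-0123456789abcdef0
            $ aws ec2 detach-volume --volume-id vol-0123456789abcdef0
            $ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
                  --instance-id i-0fedcba9876543210 --device /dev/sdf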
    3. Identify the attached volume and mount it.

            $ sudo lsblk
            $ sudo lsblk -f
            # The device name may differ (e.g. /dev/nvme1n1p1 on Nitro); check the lsblk output
            $ sudo mkdir /rescue
            $ sudo mount /dev/xvdf1 /rescue

    4. Mount the required pseudo filesystems and chroot into the environment.

            $ for i in proc sys dev run; do sudo mount --bind /$i /rescue/$i ; done
            $ sudo chroot /rescue

    5. Check the cloud-init configuration and verify that the cloud-init package is installed; install it if it is missing.

            # Inside the chroot you are already root, so sudo is not needed
            $ ls -l /etc/cloud/
            $ rpm -qa | grep cloud-init
            $ yum install cloud-init
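
       Since the boot error is "Failed to start LSB: Bring up/down networking", it is also worth reviewing the network configuration and boot logs while still chrooted (paths assume a RHEL/CentOS-style AMI):

            $ cat /etc/sysconfig/network-scripts/ifcfg-eth0
            $ tail -n 50 /var/log/cloud-init.log
            $ tail -n 50 /var/log/messages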


    6. Exit from the chroot environment and unmount the filesystems.

            $ exit
            $ for i in proc sys dev run; do sudo umount /rescue/$i ; done
            $ sudo umount /rescue

    7. Detach the root volume from the rescue instance and attach it back to the original instance as its root device.

    8. Start the original instance.
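
       A hedged CLI sketch of steps 7 and 8 (IDs are placeholders; the --device value must match the instance's original root device name, commonly /dev/xvda or /dev/sda1):

            $ aws ec2 detach-volume --volume-id vol-0123456789abcdef0
            $ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
                  --instance-id i-0123456789abcdef0 --device /dev/xvda
            $ aws ec2 start-instances --instance-ids i-0123456789abcdef0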
