AWS instance, Failed to start LSB: Bring up/down networking error message


I am getting this error and I am not able to SSH into the instance. I have already restarted the instance, but I still cannot SSH into it. What should I do?

The steps to replicate this are:

  1. We create an AMI

  2. We create an instance based on that AMI

  3. We get a "reachability check failed" error and the instance status shows as impaired

  4. The system log shows "Failed to start LSB: Bring up/down networking" and "Started Crash recovery kernel arming" (the status checks and system log can also be pulled with the CLI sketch after this list).
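
For reference, a minimal AWS CLI sketch for pulling the status checks and the system log without SSH; the instance ID below is a placeholder:

    # Show the system and instance reachability status checks
    $ aws ec2 describe-instance-status --instance-ids i-0123456789abcdef0

    # Retrieve the instance's system log (console output)
    $ aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text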

Thanks in advance

2 Answers

I would start by troubleshooting with the EC2 Serial Console - try to determine which drivers or modules are not loading correctly and causing the connectivity issues.
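
As a rough sketch (this assumes a Nitro-based instance type; the instance ID, region, and key path are placeholders), serial console access can be enabled and used like this:

    # One-time, account-wide: allow EC2 Serial Console access
    $ aws ec2 enable-serial-console-access

    # Push a temporary SSH public key for the serial console session
    $ aws ec2-instance-connect send-serial-console-ssh-public-key \
        --instance-id i-0123456789abcdef0 \
        --serial-port 0 \
        --ssh-public-key file://~/.ssh/id_rsa.pub

    # Connect to serial port 0 of the instance (region is a placeholder)
    $ ssh i-0123456789abcdef0.port0@serial-console.ec2-instance-connect.us-east-1.aws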

If you're starting from a public AMI, launch an instance based on that AMI and make sure that it boots correctly; then make changes one-by-one until you get to the change that causes the error messages you're seeing.
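
If it helps, a one-off test launch can be scripted with the AWS CLI; the AMI ID, instance type, key pair, and subnet below are placeholders:

    # Launch a single test instance from the AMI under suspicion
    $ aws ec2 run-instances \
        --image-id ami-0123456789abcdef0 \
        --instance-type t3.micro \
        --key-name my-key \
        --subnet-id subnet-0123456789abcdef0 \
        --count 1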

AWS EXPERT
Answered 2 years ago
  • Thanks for answering. I am not starting from a public AMI. How would you determine what drivers or modules are not loading correctly and causing connectivity issues?

  • My instance type is not built on the AWS Nitro System. It seems it does not support EC2 serial console. What am I missing?

  • If possible, use an AMI that supports Nitro - it gives you access to far more modern instance types. If that's not possible, then the next best thing is to enable as much logging as possible; then, when the launch fails, detach the EBS volume, attach it to a working instance, and go through the logs (see the sketch below).
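
For example, a minimal sketch of digging through the detached volume on a rescue instance; the device name and mount point are assumptions, and the log paths assume an Amazon Linux / RHEL-style AMI:

    # Mount the failed instance's root volume on the rescue instance
    $ sudo mkdir -p /rescue
    $ sudo mount /dev/xvdf1 /rescue

    # Look for boot-time errors around networking and cloud-init
    $ sudo grep -iE 'fail|error' /rescue/var/log/messages | grep -iE 'network|eth0|ena'
    $ sudo less /rescue/var/log/cloud-init.log

    # Check the network interface configuration that boot is trying to bring up
    $ ls /rescue/etc/sysconfig/network-scripts/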


This is the solution:

    1. Stop the impaired instance and detach its root volume.
    2. Attach the root volume to a rescue instance running in the same Availability Zone. (Steps 1, 2, 7 and 8 can also be done with the AWS CLI; see the sketch after this list.)
    3. Check the disk/volume attached and mount it.

            $ sudo lsblk
            $ sudo lsblk -f
            $ sudo mkdir /rescue
            $ sudo mount /dev/xvdf1 /rescue

    4. Mount the required pseudo filesystems and chroot into the environment.

            $ for i in proc sys dev run; do sudo mount --bind /$i /rescue/$i ; done
            $ sudo chroot /rescue

    5. Check the cloud-init configurations and the cloud-init package, if it is installed.

            $ ls -l /etc/cloud/
            $ sudo rpm -qa | grep cloud-init
            $ sudo yum install cloud-init


    6. Exit from the chroot environment and unmount the filesystems.

            $ exit
            $ for i in proc sys dev run; do sudo umount /rescue/$i ; done
            $ sudo umount /rescue

    7. Detach the root volume from the rescue instance and attach it to the original instance.

    8. Start the instance.
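
A minimal AWS CLI sketch of the volume shuffle in steps 1, 2, 7 and 8; the instance IDs, volume ID, and device names are placeholders, and the root device name must match what the original instance expects (often /dev/xvda or /dev/sda1):

    # Step 1: stop the impaired instance and detach its root volume
    $ aws ec2 stop-instances --instance-ids i-IMPAIRED
    $ aws ec2 wait instance-stopped --instance-ids i-IMPAIRED
    $ aws ec2 detach-volume --volume-id vol-ROOT
    $ aws ec2 wait volume-available --volume-ids vol-ROOT

    # Step 2: attach the volume to the rescue instance (same Availability Zone)
    $ aws ec2 attach-volume --volume-id vol-ROOT --instance-id i-RESCUE --device /dev/sdf

    # Steps 7 and 8: move the volume back and start the original instance
    $ aws ec2 detach-volume --volume-id vol-ROOT
    $ aws ec2 wait volume-available --volume-ids vol-ROOT
    $ aws ec2 attach-volume --volume-id vol-ROOT --instance-id i-IMPAIRED --device /dev/xvda
    $ aws ec2 start-instances --instance-ids i-IMPAIRED
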
Answered 2 years ago
