AWS instance: "Failed to start LSB: Bring up/down networking" error message


I am getting this error and I am not able to SSH into the instance. I have already restarted it, but I still can't SSH in. What should I do?

The steps to replicate this are:

  1. We create an AMI

  2. We create an instance based on that AMI

  3. We get a "reachability check failed" / "status impaired" error

  4. System log says: Failed to start LSB: Bring up/down networking and Started Crash recovery kernel arming. (A CLI sketch for pulling the status checks and the console log follows the list.)
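
For reference, a rough sketch of how the failed status checks and the boot log can be pulled with the AWS CLI (the instance ID below is a placeholder):

    # Placeholder instance ID; substitute your own.
    INSTANCE_ID=i-0123456789abcdef0

    # Show the system/instance reachability checks that report "impaired".
    aws ec2 describe-instance-status --instance-ids "$INSTANCE_ID" --include-all-instances

    # Pull the latest console output, where the "Failed to start LSB" message appears.
    aws ec2 get-console-output --instance-id "$INSTANCE_ID" --latest --output text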

Thanks in advance

2 Answers

I would start by troubleshooting with the EC2 Serial Console: try to determine which drivers or modules are failing to load and causing the connectivity issues.
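
For example, a minimal sketch of enabling and connecting to the serial console from the AWS CLI, assuming the instance type supports it (the instance ID, key path, and region below are placeholders):

    # One-time: allow serial console access for the account (needs appropriate permissions).
    aws ec2 enable-serial-console-access

    # Push a temporary SSH public key to serial port 0 of the instance.
    aws ec2-instance-connect send-serial-console-ssh-public-key \
        --instance-id i-0123456789abcdef0 \
        --serial-port 0 \
        --ssh-public-key file://~/.ssh/id_rsa.pub

    # Connect within about 60 seconds; the session shows the boot/console messages.
    ssh -i ~/.ssh/id_rsa i-0123456789abcdef0.port0@serial-console.ec2-instance-connect.us-east-1.aws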

If you're starting from a public AMI, launch an instance based on that AMI and make sure that it boots correctly; then make changes one-by-one until you get to the change that causes the error messages you're seeing.

AWS
EXPERT
answered 2 years ago
  • Thanks for answering. I am not starting from a public AMI. How would you determine what drivers or modules are not loading correctly and causing connectivity issues?

  • My instance type is not built on the AWS Nitro System, so it seems it does not support the EC2 Serial Console. What am I missing?

  • If possible, use an AMI that supports Nitro - it gives you access to far more modern instance types. If that's not possible, the next best thing is to enable as much logging as possible; then, when the launch fails, detach the EBS volume, attach it to a working instance, and go through the logs.
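
A rough sketch of that last suggestion, going through the logs on the detached volume (the device name and mount point below are assumptions; the actual device depends on how the volume was attached):

    # On the working rescue instance, after attaching the impaired root volume:
    lsblk                                  # identify the attached device, e.g. /dev/xvdf1
    sudo mkdir -p /mnt/impaired
    sudo mount /dev/xvdf1 /mnt/impaired

    # Go through the boot and network logs for driver/module errors.
    sudo less /mnt/impaired/var/log/messages
    sudo less /mnt/impaired/var/log/cloud-init.log
    sudo grep -iE 'network|ena|eth0' /mnt/impaired/var/log/messages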


This is the solution:

    1. Stop the impaired instance and detach its root volume (see the CLI sketch after this list).
    2. Attach the root volume to another rescue instance running in the same Availability Zone.
    3. Check the attached disk/volume and mount it.

            $ sudo lsblk
            $ sudo lsblk -f
            $ sudo mkdir /rescue
            $ sudo mount /dev/xvdf1 /rescue

    4. Mount the required pseudo filesystems and chroot into the environment.

            $ for i in proc sys dev run; do sudo mount --bind /$i /rescue/$i ; done
            $ sudo chroot /rescue

    5. Check the cloud-init configuration and whether the cloud-init package is installed; if it is missing or broken, reinstall it.

            $ ls -l /etc/cloud/
            $ sudo rpm -qa | grep cloud-init
            $ sudo yum install cloud-init


    6. Exit from the chroot environment and unmount the filesystems.

            $ exit
            $ for i in proc sys dev run; do sudo umount /rescue/$i ; done
            $ sudo umount /rescue

    7. Detach the root volume from the rescue instance and attach it back to the original instance.

    8. Start the instance.
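
A minimal AWS CLI sketch of the stop/detach/attach steps above (the instance IDs, volume ID, and device names are placeholders):

    # Placeholder IDs; substitute your own.
    IMPAIRED=i-0aaaaaaaaaaaaaaaa
    RESCUE=i-0bbbbbbbbbbbbbbbb
    VOLUME=vol-0cccccccccccccccc

    # Steps 1-2: stop the impaired instance and move its root volume to the rescue instance.
    aws ec2 stop-instances --instance-ids "$IMPAIRED"
    aws ec2 wait instance-stopped --instance-ids "$IMPAIRED"
    aws ec2 detach-volume --volume-id "$VOLUME"
    aws ec2 wait volume-available --volume-ids "$VOLUME"
    aws ec2 attach-volume --volume-id "$VOLUME" --instance-id "$RESCUE" --device /dev/sdf

    # Steps 7-8: after the fix, move the volume back and start the original instance.
    # The device name must match the instance's original root device (often /dev/xvda or /dev/sda1).
    aws ec2 detach-volume --volume-id "$VOLUME"
    aws ec2 wait volume-available --volume-ids "$VOLUME"
    aws ec2 attach-volume --volume-id "$VOLUME" --instance-id "$IMPAIRED" --device /dev/xvda
    aws ec2 start-instances --instance-ids "$IMPAIRED"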
answered 2 years ago
