Can’t SSH to an EC2 Ubuntu instance: /etc/fstab breaks bootup due to a missing EBS volume [SOLVED]


So the /etc/fstab file on your root volume looked like this:

LABEL=cloudimg-rootfs / ext4 defaults 0 0
/dev/xvdf /mnt/backups auto defaults,comment=cloudconfig 0 2

By mistake you deleted the EBS volume that you had mounted on /mnt/backups (or whatever folder), and then you restarted your Ubuntu instance, not knowing that when /etc/fstab references a volume that can’t be mounted, boot hangs and the instance never gets around to starting application-layer network services like SSH on port 22…

You can ping the machine, but you can’t SSH in, and Amazon support won’t respond or will tell you to go fuck yourself.

You learn that Ubuntu has had this bug for a while, and that it’s addressed by adding the nobootwait option to the volume’s entry in /etc/fstab.

You wish your /etc/fstab looked like this, but you can’t get in, and Amazon doesn’t give you any option in their console to open a shell on the machine and fix the problem…

LABEL=cloudimg-rootfs / ext4 defaults 0 0
/dev/xvdf /mnt/backups auto defaults,nobootwait,comment=cloudconfig 0 2

No worries, I have a fix that will let you edit that file, boot back up, and try to recover things. You may have lost that EBS volume, but you won’t have to set up this machine again.

1. Make a snapshot of the root volume on that instance. This will take a while. (If you prefer the command line, a rough AWS CLI equivalent of the console steps is sketched after this list.)
2. Create a new EBS volume from that snapshot, in the same availability zone where the EC2 instance lives.
3. Launch an identical temporary EC2 instance in the same zone.
4. Attach the volume you created in step 2 to the new instance.
5. SSH to the new machine.
6. Run sudo fdisk -l; you should see all the attached devices, including something like this for the attached EBS volume:

Disk /dev/xvdf: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdf doesn't contain a valid partition table

Don’t listen to that last message; the volume is fine. The Ubuntu cloud images write the filesystem directly to the device without a partition table, so fdisk has nothing to list, but you can still mount it.

7. Create a folder to mount the disk on: sudo mkdir /mnt/old-volume
8. Mount it: sudo mount -t auto /dev/xvdf /mnt/old-volume
9. Edit /mnt/old-volume/etc/fstab and fix it, adding nobootwait or removing the broken entry (see the command sketch after this list).
10. Unmount /mnt/old-volume, turn off the temporary instance, and detach the repaired volume.
11. Turn off the original instance and detach its broken root volume (attached at /dev/sda1).
12. Attach the repaired volume to the original instance as /dev/sda1.
13. Start the original instance.
14. SSH to it. (Unless you’re using an Elastic IP, it will have a new public IP address, so make sure to update your DNS or load-balancer entries.)
15. Terminate the temporary instance and delete any volumes you no longer need.
16. Get to work.
17. Leave a tip below. 😉
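
If you want the repair part (steps 7 through 10) in one shot, here is a minimal sketch of the commands to run on the temporary instance. It assumes the volume showed up as /dev/xvdf and that the broken line is the /mnt/backups one from the fstab above; adjust the device, mount point, and sed pattern to match your setup.

# on the temporary rescue instance
sudo mkdir /mnt/old-volume
sudo mount -t auto /dev/xvdf /mnt/old-volume

# back up the file, then append nobootwait to the broken entry
# (editing it by hand with vi/nano works just as well)
sudo cp /mnt/old-volume/etc/fstab /mnt/old-volume/etc/fstab.bak
sudo sed -i 's|defaults,comment=cloudconfig|defaults,nobootwait,comment=cloudconfig|' /mnt/old-volume/etc/fstab

# sanity-check the result, then unmount before detaching the volume
cat /mnt/old-volume/etc/fstab
sudo umount /mnt/old-volume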
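
And if you’d rather drive the AWS side from the command line instead of the console, the snapshot/volume/attach shuffle looks roughly like this with the AWS CLI. Every ID, the zone, and the device names below are placeholders; substitute your own.

# snapshot the broken instance's root volume
aws ec2 create-snapshot --volume-id vol-AAAAAAAA --description "broken root before fstab fix"

# once the snapshot completes, make a volume from it in the same zone
# and attach it to the temporary rescue instance
# (a volume attached as /dev/sdf usually shows up inside Ubuntu as /dev/xvdf)
aws ec2 create-volume --snapshot-id snap-BBBBBBBB --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-CCCCCCCC --instance-id i-RESCUE --device /dev/sdf

# ...fix /etc/fstab on the rescue instance (previous sketch), then swap the
# repaired volume in as the original instance's root device
aws ec2 detach-volume --volume-id vol-CCCCCCCC
aws ec2 stop-instances --instance-ids i-ORIGINAL
aws ec2 detach-volume --volume-id vol-AAAAAAAA
aws ec2 attach-volume --volume-id vol-CCCCCCCC --instance-id i-ORIGINAL --device /dev/sda1
aws ec2 start-instances --instance-ids i-ORIGINAL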
