Once the instance has booted, check that the user data script was created correctly. You can see this via the console, or by logging on to the instance and looking in /var/lib/cloud/instances/i-*.
It looks like there are some '\n' missing from the script above.
If you find the user-data.txt file under /var, copy it to /tmp as user-data.sh, chmod 755 it, then run it by hand to see if there are errors.
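The copy-and-run debugging flow above can be sketched as follows. Since /var/lib/cloud only exists on a cloud-init instance, a temporary stand-in file is used here in place of the real user-data.txt; the paths and the sample script body are illustrative only:

```shell
# Simulate: cp /var/lib/cloud/instance/user-data.txt /tmp/user-data.sh
# The stand-in below takes the place of the real user-data.txt.
SRC=$(mktemp)
printf '#!/bin/bash\necho hello from user-data\n' > "$SRC"
cp "$SRC" /tmp/user-data.sh
chmod 755 /tmp/user-data.sh
# Run it by hand and watch the output for errors.
OUT=$(/tmp/user-data.sh)
echo "$OUT"
```

On a real instance you would run the copied script with sudo, since user data normally executes as root.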
If you have only two block devices, then the second device will always be the last disk row in the lsblk
output. So extract the name by chopping up the output of the command, e.g.
$ SECOND_DISK=/dev/`lsblk -l | grep -w disk | tail -1 | awk '{print $1}'`
$ echo $SECOND_DISK
/dev/sdb
Then use this variable in the commands to create and mount the volume, updating the fstab, and so on.
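Those follow-on steps (create the filesystem, mount it, update fstab) can be sketched like this. Because mkfs and mount need root and a real device, the blkid output below is a hypothetical stand-in for illustration; on a real instance you would run blkid -o export "$SECOND_DISK" instead:

```shell
# On a real instance (as root):
#   mkfs -t ext4 "$SECOND_DISK"
#   mkdir -p /data
#   blkid -o export "$SECOND_DISK"   # yields lines like the sample below
# Hypothetical blkid -o export output for the new filesystem:
BLKID_EXPORT='DEVNAME=/dev/sdb
UUID=0b3f1c9a-1111-2222-3333-444455556666
TYPE=ext4'
# Build the fstab entry from the UUID= line, then append it and mount:
UUID_LINE=$(printf '%s\n' "$BLKID_EXPORT" | grep '^UUID=')
FSTAB_LINE="$UUID_LINE /data ext4 defaults,nofail 0 2"
echo "$FSTAB_LINE"
#   echo "$FSTAB_LINE" >> /etc/fstab
#   mount -a
```

Mounting by UUID with the nofail option keeps the instance bootable even if the volume is later detached.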
Thanks, but I have one question: per the man page, the lsblk output order doesn't seem to be guaranteed. Why do you say it will always be the drive I'm looking for?
The default output, as well as the default output from options like --fs and --topology, is subject to change. So whenever possible, you should avoid using default outputs in your scripts. Always explicitly define expected columns by using --output columns-list and --list in environments where a stable output is required.
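Following that advice, the "last disk" extraction can be written against explicitly requested columns rather than the default output. The sample lsblk output below stands in for a real Nitro instance with a root disk plus one attached volume; the device names are illustrative:

```shell
# Stable form: lsblk --list --output NAME,TYPE prints exactly these columns.
# Sample output standing in for: lsblk --list --output NAME,TYPE
LSBLK_OUT='NAME TYPE
nvme0n1 disk
nvme0n1p1 part
nvme1n1 disk'
# Take the last row whose TYPE column is "disk" (the assumption in the answer).
SECOND_DISK=/dev/$(printf '%s\n' "$LSBLK_OUT" | awk '$2=="disk"{name=$1} END{print name}')
echo "$SECOND_DISK"
```

Using awk on the TYPE column also avoids the grep -w pipeline, which can mis-match if a device name happens to contain the word "disk".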
I ran this 3 times and it worked; the next 2 times it didn't (the nvme?n1 was already "taken"). So it seems it isn't always the last drive listed. Any thoughts on this? I was hoping to map the device setting in the CFT to the nvme?n1, but it doesn't seem to be possible. I'll try setting the output columns with -o and sorting it somehow.
Well, I'm posting what I went with, though I'm still not happy, as there is still no "linkage" between the CFT's volume device name ("/dev/xvdb") and the "nvme?n1" disk in the UserData. It's attached, so there should be a way to definitively link the two. However, using the previous two comments, I was able to craft the following UserData bash code that mounts the attached EBS volume "more often than not". Maybe this will help somebody, or get them closer to the actual answer. I left in the "debugging" echoes, as they were helpful for showing what was being done (per the first answer). The main problem is that the "lsblk" call is a best guess as to which disk I'll get, and my EBS root volume size cannot (or should not) be the same as my EBS data volume size:
"UserData": { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n",
"echo Used for debugging - can be deleted at any time >> /build.txt\n",
"mkdir /data\n",
"DATA_DISK=$(lsblk -l | grep -w disk | grep ",
{ "Ref" : "DataVolumeSize" },
" | tail -1 | awk '{print $1}')\n",
"echo $(lsblk -l) >> /build.txt\n",
"echo DATA_DISK = $DATA_DISK >> /build.txt\n",
"mkfs -t ext4 /dev/$DATA_DISK\n",
"echo $(blkid -o export /dev/$DATA_DISK | grep ^UUID=) /data ext4 defaults,nofail 0 2 >> /etc/fstab\n",
"echo $(blkid -o export /dev/$DATA_DISK | grep ^UUID=) /data ext4 defaults,nofail 0 2 >> /build.txt\n",
"echo blkid= $(blkid -o export /dev/$DATA_DISK) >> /build.txt\n",
"mount -a\n",
"echo $(cat /etc/fstab) >> /build.txt\n",
...
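For the "linkage" problem, one approach worth trying: on Nitro-based instances, EBS exposes the volume ID (with the hyphen removed) as the NVMe device's serial number, so lsblk's SERIAL column can map a specific volume to its nvme device. This is a sketch, not a drop-in fix; the volume ID and the lsblk output below are hypothetical stand-ins, and on a real instance you would pass the volume ID into UserData (e.g. via a Ref to the volume resource) and run lsblk --list --output NAME,SERIAL directly:

```shell
# Hypothetical volume ID, e.g. passed into UserData from the CFT.
VOLUME_ID=vol-0123456789abcdef0
# Sample output standing in for: lsblk --list --output NAME,SERIAL
LSBLK_OUT='NAME SERIAL
nvme0n1 vol0fedcba9876543210
nvme1n1 vol0123456789abcdef0'
# EBS reports the volume ID without its hyphen as the NVMe serial.
SERIAL=$(printf '%s' "$VOLUME_ID" | tr -d -)
# Find the device whose SERIAL column matches.
DATA_DISK=/dev/$(printf '%s\n' "$LSBLK_OUT" | awk -v s="$SERIAL" '$2==s{print $1}')
echo "$DATA_DISK"
```

Unlike matching on volume size, this is deterministic even when the root and data volumes are the same size.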
P.S. I saw (https://github.com/binxio/ec2-boot-mount-ebs-volume) where someone was using:
"mkdir /data\n",
"blkid $(readlink -f /dev/xvdb) || mkfs -t ext4 $(readlink -f /dev/xvdb)\n",
"e2label $(readlink -f /dev/xvdb) xxx-data\n",
"sed -e '/^[\/][^ \t]*[ \t]*\/data[ \t]/d' /etc/fstab\n",
"grep -q ^LABEL=xxx-data /etc/fstab || echo 'LABEL=xxx-data /data ext4 defaults' >> /etc/fstab\n",
"grep -q \"^$(readlink -f /dev/xvdb) /data \" /proc/mounts || mount /data\n",
It never worked for me and always seemed to return a blank. BTW, I'm on Ubuntu, so maybe that has something to do with it.
Didn't solve the issue, but great information. I didn't know about the "copy" of the user-data.txt, and that's extremely useful for debugging. I did correct the missing '\n's, but that didn't seem to be the issue.