When trying to mount an XFS filesystem on an AWS instance, I got the error “mount: wrong fs type, bad option, bad superblock on /dev/sdh”. Examine the volume’s UUID with the xfs_db command:
shell> sudo xfs_db -c uuid /dev/nvme2n1
To fix the problem, you have two options…
Temporary Solution
Add the nouuid mount option to temporarily […]
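A minimal sketch of the two options, assuming a duplicate-UUID cause and the /dev/nvme2n1 device from the post; the mount point is a placeholder, and the permanent fix shown here (regenerating the UUID with xfs_admin) is the usual approach rather than a quote from the truncated text:
shell> sudo mount -o nouuid /dev/nvme2n1 /mnt/restore   # temporary: ignore the duplicate UUID for this mount (/mnt/restore is a placeholder)
shell> sudo umount /mnt/restore
shell> sudo xfs_admin -U generate /dev/nvme2n1          # permanent: write a new random UUID to the superblock (filesystem must be unmounted)
shell> sudo mount /dev/nvme2n1 /mnt/restore             # now mounts without nouuid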
For a Xen ext4 root volume
First, use the AWS Console to modify the volume to the desired size; in our example we want to go from 10 GB to 25 GB for the root filesystem.
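If you would rather script that console step, the AWS CLI offers the same resize; the volume ID below is a placeholder, not one from the post:
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 25                 # vol-... is a placeholder
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0         # watch for the modification to complete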
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
This only works on the new expandable volumes. YMMV, as always.
1. Examine
file -s /dev/xvd*
lsblk
df -h
2. Grow partition
Expand the modified partition using growpart (and note the unusual syntax of separating the device name from the partition number):
growpart /dev/xvda 1
lsblk
3. Expand filesystem
resize2fs /dev/xvda1
df -h
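As an aside not covered by the post: if the root filesystem were XFS rather than ext4, step 3 would use the XFS grow tool instead of resize2fs; it operates on the mount point:
xfs_growfs /    # grow the XFS filesystem mounted at / into the enlarged partition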
I had to replace an ailing root volume on AWS, so I decided to double the size when I created the new volume from snapshot. After booting, I realized that df still showed the old filesystem size of 10 GB, not the new size of 20 GB. Here is the solution:
resize2fs /dev/xvda1
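One hedged caveat: this works when the partition already spans the larger volume. If lsblk shows the partition still at the old size, grow it first; the device and partition number here are just the example's names:
growpart /dev/xvda 1    # grow partition 1 to fill the volume
resize2fs /dev/xvda1    # then grow the ext4 filesystem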
First, use the AWS management console to create and attach a new volume. Note the device name, in our example /dev/sdf.
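If you prefer the AWS CLI to the console for this step, a rough equivalent follows; the availability zone, size, volume type, and IDs are placeholders:
# aws ec2 create-volume --size 10 --availability-zone us-east-1a --volume-type gp2                            # placeholder AZ/size/type
# aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf # placeholder IDs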
# yum install xfsprogs                                          # provides mkfs.xfs and the other XFS userland tools
# mkfs.xfs /dev/sdf                                             # create the XFS filesystem on the new volume
# echo "/dev/sdf /data xfs noatime 0 0" | tee -a /etc/fstab     # persist the mount across reboots
# mkdir -m 000 /data                                            # mount point; mode 000 keeps it unusable until mounted
# mount /data                                                   # mount using the fstab entry just added
# df -h                                                         # confirm the size and mount point
# mount                                                         # confirm the mount options
You now have a 10 GB (or whatever size you specified) EBS volume mounted under /data with an XFS file system, and it will be automatically mounted if the instance reboots.
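A closing refinement that is not part of the original recipe: on Nitro-based instances the attached volume may appear as an NVMe device rather than /dev/sdf, so an fstab entry keyed on the filesystem UUID is more robust than the device-name entry above:
# blkid /dev/sdf                                                              # print the filesystem UUID
# echo "UUID=<uuid-from-blkid> /data xfs noatime 0 0" | tee -a /etc/fstab     # replace <uuid-from-blkid> with the value blkid printed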