How to grow Raid6 and XFS
This is my personal guide to growing my XFS filesystem. The filesystem lives on a software RAID6 array, to which I add one more 3TB disk.
!!! REMEMBER to back up critical data before trying this !!!
In this example I grow /dev/md0 from 10 drives to 11 drives by adding /dev/sdl1 (the spare disk). Later I GPT format a new disk and add a new spare disk, /dev/sdm1.
My recommendation is to always use RAID6 on drives larger than 146 GB.
My recommendation is to use mdadm v3.2.2 or newer.
- First, my software versions
[root@s02 ~]# uname -a
Linux s02.567.dk 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux
[root@s02 ~]# xfs_info -V
xfs_info version 3.1.1
[root@s02 ~]# mdadm -V
mdadm - v3.2.2 - 17th June 2011
[root@s02 ~]# parted -v
parted (GNU parted) 2.1
[root@s02 ~]# uptime
22:32:57 up 213 days, 20:26, 1 user, load average: 1.19, 1.07, 1.05
- Now verify that the software RAID is clean.
[root@s02 ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Jun 23 01:00:13 2012
Raid Level : raid6
Array Size : 23442123104 (22356.15 GiB 24004.73 GB)
Used Dev Size : 2930265388 (2794.52 GiB 3000.59 GB)
Raid Devices : 10
Total Devices : 11
Persistence : Superblock is persistent
Update Time : Thu Jul 5 22:01:54 2012
State : clean
Active Devices : 10
Working Devices : 11
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 4K
Name : s02.567.dk:0 (local to host s02.567.dk)
UUID : b7b5051a:f925da84:00125dce:f7addf53
Events : 19
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 8 65 3 active sync /dev/sde1
4 8 81 4 active sync /dev/sdf1
5 8 97 5 active sync /dev/sdg1
6 8 113 6 active sync /dev/sdh1
7 8 129 7 active sync /dev/sdi1
8 8 145 8 active sync /dev/sdj1
9 8 161 9 active sync /dev/sdk1
10 8 177 - spare /dev/sdl1
[root@s02 ~]#
- Grow the RAID by using the spare disk - if you don't have a spare, I show how to add one later.
[root@s02 ~]# mdadm --grow /dev/md0 --raid-devices=11
mdadm: Need to backup 288K of critical section..
[root@s02 ~]#
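The reshape can also be followed from /proc/mdstat. This is an optional extra check (not part of my session above); the 60 second interval is just an example.
cat /proc/mdstat
watch -n 60 cat /proc/mdstat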
- Now verify that it is reshaping - and check the progress.
[root@s02 ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Jun 23 01:00:13 2012
Raid Level : raid6
Array Size : 23442123104 (22356.15 GiB 24004.73 GB)
Used Dev Size : 2930265388 (2794.52 GiB 3000.59 GB)
Raid Devices : 11
Total Devices : 11
Persistence : Superblock is persistent
Update Time : Thu Jul 5 22:19:44 2012
State : clean, reshaping
Active Devices : 11
Working Devices : 11
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 4K
Reshape Status : 0% complete
Delta Devices : 1, (10->11)
Name : s02.567.dk:0 (local to host s02.567.dk)
UUID : b7b5051a:f925da84:00125dce:f7addf53
Events : 99
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 8 65 3 active sync /dev/sde1
4 8 81 4 active sync /dev/sdf1
5 8 97 5 active sync /dev/sdg1
6 8 113 6 active sync /dev/sdh1
7 8 129 7 active sync /dev/sdi1
8 8 145 8 active sync /dev/sdj1
9 8 161 9 active sync /dev/sdk1
10 8 177 10 active sync /dev/sdl1
[root@s02 ~]#
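If the reshape runs much slower than the disks can handle, the kernel's md rebuild speed limits can be raised. These are the standard md sysctls (values in KB/s per disk); the numbers below are only examples, adjust them to your hardware.
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000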
- See the disk I/O - useful if you cannot physically see the disks.
If iostat is not installed: yum install sysstat
Look at rsec/s and wsec/s - reads on the added drive are low, because the data is written to this disk from the other disks.
[root@s02 ~]# iostat -h -x
Linux 2.6.32-71.29.1.el6.x86_64 (s02.567.dk) 07/05/2012 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.84 0.00 0.92 0.47 0.00 97.77
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.55 1.77 1.02 0.66 55.75 18.62 44.35 0.02 10.41 5.95 1.00
sdb 50.94 45.33 1.63 1.36 420.11 373.25 265.75 0.16 54.30 19.23 5.74
sdc 50.95 45.33 1.62 1.35 420.13 373.25 267.37 0.16 54.98 19.37 5.75
sdd 50.98 45.42 1.58 1.26 420.12 373.17 279.18 0.17 59.07 20.29 5.77
sde 50.98 45.44 1.58 1.24 420.11 373.23 281.76 0.17 60.41 20.59 5.80
sdf 50.38 45.06 2.18 1.62 420.29 373.17 209.05 0.13 34.04 13.71 5.20
sdg 50.42 45.07 2.14 1.60 420.28 373.17 212.12 0.13 35.94 14.22 5.32
sdh 50.44 45.09 2.13 1.58 420.29 373.17 213.88 0.14 37.48 14.45 5.36
sdi 50.39 45.06 2.17 1.62 420.30 373.17 209.44 0.13 35.47 14.03 5.32
sdj 50.41 45.06 2.15 1.62 420.28 373.18 210.87 0.14 36.19 14.26 5.37
sdk 50.43 45.09 2.13 1.60 420.26 373.25 213.01 0.14 37.67 14.52 5.41
sdl 0.02 43.97 0.05 2.69 0.37 373.23 136.07 0.06 21.06 7.45 2.05
sdm 0.23 0.00 0.07 0.00 0.59 0.00 7.90 0.00 1.56 1.56 0.01
sdn 0.23 0.00 0.08 0.00 0.59 0.00 7.83 0.00 1.54 1.53 0.01
dm-0 0.00 0.00 1.23 2.33 53.39 18.62 20.21 0.16 44.92 2.73 0.97
dm-1 0.00 0.00 0.06 0.00 0.49 0.00 8.00 0.00 5.22 2.20 0.01
md0 0.00 0.00 0.27 0.11 0.55 0.85 3.67 0.00 0.00 0.00 0.00
dm-2 0.00 0.00 0.07 0.00 0.55 0.00 7.93 0.00 3.89 0.90 0.01
[root@s02 ~]#
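Note that the first iostat report shows averages since boot. For the current rates during the reshape, run it in interval mode - for example a 5 second interval, repeated 3 times:
iostat -x 5 3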
- Now wait until it is reshaped.
This can take a long time with large and/or busy drives - with 11 disks of 3TB, expect to wait about a week (roughly 1% per 2 hours).
[root@s02 ~]# mdadm --detail /dev/md0 | grep Reshape
Reshape Status : 1% complete
[root@s02 ~]#
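During the reshape, /proc/mdstat also prints an estimated time to completion (the finish= field) on the progress line, which is handy for the long wait. For example:
grep -A 3 md0 /proc/mdstat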
- Verify the filesystem size.
Remember the size.
[root@s02 ~]# df -h | grep /dev/md0
/dev/md0 22T 2.1G 22T 1% /mnt/md0
[root@s02 ~]#
- Now extend the XFS filesystem.
[root@s02 ~]# xfs_growfs /dev/md0
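xfs_growfs prints the filesystem geometry when it runs (not shown here). If your version complains that /dev/md0 is not a mounted XFS filesystem, point it at the mount point instead. The -d flag (grow the data section to the maximum size) is the default; I show it only to be explicit.
xfs_growfs -d /mnt/md0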
- Verify that the filesystem is extended.
Remember that filesystems more than 80% full slow down in performance.
[root@s02 ~]# df -h | grep /dev/md0
/dev/md0 22T 16T 6.5T 71% /mnt/md0
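As an extra check (not part of the original session), xfs_info reports the data block count; the dblocks value should be larger than before the grow.
xfs_info /mnt/md0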
- GPT format the new disk - here the next spare disk.
[root@s02 ~]# parted /dev/sdn
GNU Parted 2.1
Using /dev/sdn
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0.00TB 3.00TB
(parted) print
Model: ATA WDC WD30EZRX-00M (scsi)
Disk /dev/sdn: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Number Start End Size File system Name Flags
1 1049kB 3001GB 3001GB primary
(parted) quit
Information: You may need to update /etc/fstab.
[root@s02 ~]#
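Parted rounded the partition start up to 1049kB, which keeps it aligned on this 4096 byte sector drive. If you prefer to be explicit, the same result can be requested directly - an alternative to the interactive session above, not what I ran:
parted -s /dev/sdn mklabel gpt
parted -s /dev/sdn mkpart primary 1MiB 100%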
- Add a spare to the RAID (here /dev/sdm1)
[root@s02 ~]# mdadm --add /dev/md0 /dev/sdm1
[root@s02 ~]#
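If your /etc/mdadm.conf lists the array with an explicit device count, remember to update it after the grow so the array assembles correctly at boot. mdadm can print a suitable ARRAY line - review it before merging it into the config file.
mdadm --detail --scan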
- Verify that it is a spare.
[root@s02 ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Jun 23 01:00:13 2012
Raid Level : raid6
Array Size : xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Used Dev Size : 2930265388 (2794.52 GiB 3000.59 GB)
Raid Devices : 11
Total Devices : 12
Persistence : Superblock is persistent
Update Time : Thu Jul 5 22:01:54 2012
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 4K
Name : s02.567.dk:0 (local to host s02.567.dk)
UUID : b7b5051a:f925da84:00125dce:f7addf53
Events : 19
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 8 65 3 active sync /dev/sde1
4 8 81 4 active sync /dev/sdf1
5 8 97 5 active sync /dev/sdg1
6 8 113 6 active sync /dev/sdh1
7 8 129 7 active sync /dev/sdi1
8 8 145 8 active sync /dev/sdj1
9 8 161 9 active sync /dev/sdk1
10 8 177 10 active sync /dev/sdl1
11 8 193 - spare /dev/sdm1
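As a final optional check, the new spare also shows up in /proc/mdstat with an (S) suffix after the device name.
cat /proc/mdstat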