Google search: replace disk mdadm
First link:
http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array
RAID installation
After installing the server with RAID, verify the state of the disks:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
104320 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
20860288 blocks [2/2] [UU]
unused devices: <none>
[root@localhost ~]#
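In that output, `[UU]` means both mirror members are up; an underscore (`[U_]`) marks a failed or missing member. That makes degradation easy to check for mechanically. A minimal sketch (the `raid_status` helper name is my own, not from the article):

```shell
# Report whether any mirror member is missing, based on mdstat-style output.
# An underscore in the status flags (e.g. "[U_]") marks a failed/absent member.
raid_status() {
    if grep -q '_' "$1"; then
        echo "DEGRADED"
    else
        echo "OK"
    fi
}
```

In day-to-day use it would be pointed at the real file: `raid_status /proc/mdstat`.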
Mark sdb1 as failed:
[root@localhost ~]# mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
Verify:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[2](F) sda1[0]
104320 blocks [2/1] [U_]
md1 : active raid1 sdb2[1] sda2[0]
20860288 blocks [2/2] [UU]
unused devices: <none>
[root@localhost ~]#
Remove sdb1 from md0:
[root@localhost ~]# mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1
You have mail in /var/spool/mail/root
[root@localhost ~]#
Verify:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0]
104320 blocks [2/1] [U_]
md1 : active raid1 sdb2[1] sda2[0]
20860288 blocks [2/2] [UU]
unused devices: <none>
[root@localhost ~]#
Repeat the same steps for sdb2 (mark as failed, then remove).
Mark it as failed:
[root@localhost ~]# mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md1
Verify:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0]
104320 blocks [2/1] [U_]
md1 : active raid1 sdb2[2](F) sda2[0]
20860288 blocks [2/1] [U_]
unused devices: <none>
[root@localhost ~]#
Remove sdb2 from md1:
[root@localhost ~]# mdadm --manage /dev/md1 --remove /dev/sdb2
mdadm: hot removed /dev/sdb2
Verify:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0]
104320 blocks [2/1] [U_]
md1 : active raid1 sda2[0]
20860288 blocks [2/1] [U_]
unused devices: <none>
[root@localhost ~]#
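The four commands above can be condensed, since mdadm accepts --fail and --remove for the same device in a single invocation. A sketch assuming the same md0/sdb1 and md1/sdb2 layout (the `drop_member` helper and the dry-run guard are my own, not from the article):

```shell
# Dry run by default (RUN=echo just prints the commands); set RUN= to execute.
RUN=${RUN-echo}

# Fail and hot-remove one member of one array in a single mdadm call.
drop_member() {    # drop_member <array> <partition>
    $RUN mdadm --manage "/dev/$1" --fail "/dev/$2" --remove "/dev/$2"
}

drop_member md0 sdb1
drop_member md1 sdb2
```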
At this point, shut the machine down, physically replace the disk, and power the machine back on.
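Before powering off, it helps to note which physical unit /dev/sdb actually is, so the right disk gets pulled. Where udev's persistent names are available, the /dev/disk/by-id link names carry model and serial in the form `ata-<model>_<serial>` (listable with `ls -l /dev/disk/by-id/`). A small helper to pull out the serial, my own sketch with an illustrative link name:

```shell
# Extract the serial embedded at the end of a by-id link name
# such as ata-<model>_<serial> (naming pattern, not from the article).
byid_serial() {       # byid_serial <link-name>
    echo "${1##*_}"   # strip everything up to the last underscore
}

byid_serial ata-WDC_WD2500JS_WD-WCANK1234567   # -> WD-WCANK1234567
```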
Add the new disk so the RAID takes it into account.
The first step is to recreate the same partitions as on the working disk:
[root@localhost ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb
Checking that no-one is using this disk right now ...
OK
Disk /dev/sdb: 2610 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/sdb1 * 0+ 12 13- 104391 fd Linux raid autodetect
/dev/sdb2 13 2609 2597 20860402+ fd Linux raid autodetect
/dev/sdb3 0 - 0 0 0 Empty
/dev/sdb4 0 - 0 0 0 Empty
New situation:
Units = sectors of 512 bytes, counting from 0
Device Boot Start End #sectors Id System
/dev/sdb1 * 63 208844 208782 fd Linux raid autodetect
/dev/sdb2 208845 41929649 41720805 fd Linux raid autodetect
/dev/sdb3 0 - 0 0 Empty
/dev/sdb4 0 - 0 0 Empty
Successfully wrote the new partition table
Re-reading the partition table ...
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
You have new mail in /var/spool/mail/root
[root@localhost ~]#
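A caveat not covered by the article: `sfdisk -d` round-trips MBR partition tables, which is what this system uses. On a GPT disk, the equivalent cloning step would use sgdisk from the gdisk package (assuming it is installed); sketched below with a dry-run guard:

```shell
# Dry run by default (RUN=echo just prints the commands); set RUN= to execute.
RUN=${RUN-echo}

# Note the argument order: --replicate=<destination> <source>.
$RUN sgdisk --replicate=/dev/sdb /dev/sda
$RUN sgdisk --randomize-guids /dev/sdb   # fresh GUIDs so the copy is unique
```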
[root@localhost ~]# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 fd Linux raid autodetect
/dev/sda2 14 2610 20860402+ fd Linux raid autodetect
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 13 104391 fd Linux raid autodetect
/dev/sdb2 14 2610 20860402+ fd Linux raid autodetect
Disk /dev/md1: 21.3 GB, 21360934912 bytes
2 heads, 4 sectors/track, 5215072 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md0: 106 MB, 106823680 bytes
2 heads, 4 sectors/track, 26080 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md0 doesn't contain a valid partition table
[root@localhost ~]#
Add /dev/sdb1 to /dev/md0 and /dev/sdb2 to /dev/md1:
[root@localhost ~]# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: re-added /dev/sdb1
[root@localhost ~]#
[root@localhost ~]# mdadm --manage /dev/md1 --add /dev/sdb2
mdadm: re-added /dev/sdb2
[root@localhost ~]#
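While the rebuild runs, it can be followed live with `watch -n 5 cat /proc/mdstat` instead of retyping the cat. The progress percentage can also be scraped from that output; a small helper, my own sketch:

```shell
# Pull the recovery percentage out of mdstat-style output, e.g. from a line like
# "[=>...................] recovery = 5.2% (1098752/20860288) finish=6.8min"
recovery_pct() {    # recovery_pct <file>
    grep -o 'recovery = [0-9.]*%' "$1" | awk '{print $3}'
}
```

Pointed at the real file it would be called as `recovery_pct /proc/mdstat` (empty output once recovery is done).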
After adding the disks, they start synchronizing automatically. This can be verified:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
104320 blocks [2/2] [UU]
md1 : active raid1 sdb2[2] sda2[0]
20860288 blocks [2/1] [U_]
[=>...................] recovery = 5.2% (1098752/20860288) finish=6.8min speed=47771K/sec
unused devices: <none>
[root@localhost ~]#
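One step the transcript leaves out: md0 (the /boot mirror) is replicated, but the MBR boot code is not, so the replacement disk cannot boot the machine on its own until the bootloader is reinstalled on it. On a system of this era, roughly (a sketch with a dry-run guard; device assumed to be /dev/sdb as above):

```shell
# Dry run by default (RUN=echo just prints the command); set RUN= to execute.
RUN=${RUN-echo}

$RUN grub-install /dev/sdb   # write GRUB's stage1 into the new disk's MBR
```

With that in place, the box can still boot if sda is the next disk to die.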
Thank you very much for the information. One question: if the disk that belongs to the RAID (and in fact the RAID itself) sits on a logical volume, how should one proceed? The same way, or do the volumes also need to be worked on?
Regards and thanks!!