Avoiding automatic software RAID assembly on Linux
Sometimes we export iSCSI disks from two different machines to a third machine and, for high availability, build a software RAID across the two iSCSI disks. The problem comes after rebooting that third machine: when we log the iSCSI disks back in, as soon as the first iSCSI target is logged in and its disk is recognized, the software RAID is auto-assembled. Since only one of the underlying disks is present at that point, the MD RAID comes up incomplete (inactive/degraded), as shown below:
```shell
root@ubuntu01:~# iscsiadm -m node -T iqn.2001-04.com.example:serv01 -l
Logging in to [iface: default, target: iqn.2001-04.com.example:serv01, portal: 192.168.1.4,3260] (multiple)
Login to [iface: default, target: iqn.2001-04.com.example:serv01, portal: 192.168.1.4,3260] successful.
root@ubuntu01:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdb[0](S)
      511936 blocks

unused devices: <none>
root@ubuntu01:~# iscsiadm -m node -T iqn.2001-04.com.example:serv02 -l
Logging in to [iface: default, target: iqn.2001-04.com.example:serv02, portal: 192.168.1.5,3260] (multiple)
Login to [iface: default, target: iqn.2001-04.com.example:serv02, portal: 192.168.1.5,3260] successful.
root@ubuntu01:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdc[1] sdb[0]
      511936 blocks [2/2] [UU]
      [===>.................]  resync = 18.9% (97024/511936) finish=6.6min speed=1024K/sec

unused devices: <none>
```
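The incomplete state can be spotted mechanically in `/proc/mdstat`: an `inactive` array, or a status like `[2/1] [U_]` with a `_` marking a missing member. Below is a hypothetical helper (not from the original article) that checks for either pattern; it runs against the sample text captured above so it is self-contained, but on a live host you would read `/proc/mdstat` itself.

```shell
# Hypothetical check for an incompletely assembled md array.
# Sample text is the first /proc/mdstat output shown above.
mdstat_sample='md127 : inactive sdb[0](S)
      511936 blocks'

# "inactive", or a member-status field containing "_" (e.g. [U_]),
# both mean the array is not fully assembled.
if printf '%s\n' "$mdstat_sample" | grep -qE 'inactive|\[U*_+U*\]'; then
    echo "md array incomplete"
fi
# → md array incomplete
```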
As the output shows, the software RAID is assembled automatically as soon as the first iSCSI disk appears, and when the second iSCSI disk is logged in, a resync kicks off. To avoid both the degraded state and the unnecessary resync, we want to assemble the RAID ourselves and disable MD RAID auto-assembly. First, edit /etc/default/mdadm:
```
AUTOCHECK=false
START_DAEMON=false
```
Then add one line to /etc/mdadm/mdadm.conf:
```
AUTO -all
```
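For context, the `AUTO` keyword controls which arrays mdadm's incremental mode (`mdadm -I`, invoked by udev as devices appear) may auto-assemble; `-all` disables it for every metadata type. A slightly fuller sketch of the file, with the `DEVICE`/`ARRAY` lines shown only as commented-out illustrations (they are not part of the original setup):

```
# /etc/mdadm/mdadm.conf -- illustrative sketch
# Disable auto-assembly for all arrays and all metadata types:
AUTO -all

# Hypothetical examples only; fill in your own devices/UUID if you
# want mdadm.conf to describe the array for manual assembly:
#DEVICE /dev/sdb /dev/sdc
#ARRAY /dev/md/mdtest metadata=0.90 UUID=<your-array-uuid>
```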
With that in place, we can assemble the software RAID by hand, and neither the degraded state nor the resync occurs:
```shell
root@ubuntu01:~# iscsiadm -m discovery -t st -p 192.168.1.4
192.168.1.4:3260,1 iqn.2001-04.com.example:serv01
root@ubuntu01:~# iscsiadm -m discovery -t st -p 192.168.1.5
192.168.1.5:3260,1 iqn.2001-04.com.example:serv02
root@ubuntu01:~# iscsiadm -m node -T iqn.2001-04.com.example:serv01 -l
Logging in to [iface: default, target: iqn.2001-04.com.example:serv01, portal: 192.168.1.4,3260] (multiple)
Login to [iface: default, target: iqn.2001-04.com.example:serv01, portal: 192.168.1.4,3260] successful.
root@ubuntu01:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
root@ubuntu01:~# iscsiadm -m node -T iqn.2001-04.com.example:serv02 -l
Logging in to [iface: default, target: iqn.2001-04.com.example:serv02, portal: 192.168.1.5,3260] (multiple)
Login to [iface: default, target: iqn.2001-04.com.example:serv02, portal: 192.168.1.5,3260] successful.
root@ubuntu01:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
root@ubuntu01:~# mdadm -A /dev/md/mdtest /dev/sdb /dev/sdc
mdadm: /dev/md/mdtest has been started with 2 drives.
root@ubuntu01:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdb[0] sdc[1]
      511936 blocks [2/2] [UU]

unused devices: <none>
```
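Once auto-assembly is disabled, the login-then-assemble sequence above can be wrapped in a small function for use after each reboot. This is only a sketch: the IQNs, device paths, and array name are the ones from the example session and will differ in your environment, and error handling beyond a bail-out on login failure is omitted.

```shell
# Sketch: bring up both iSCSI disks, then assemble the RAID1 explicitly.
# IQNs and device paths come from the session above; adapt to your setup.
assemble_iscsi_raid() {
    iscsiadm -m node -T iqn.2001-04.com.example:serv01 -l || return 1
    iscsiadm -m node -T iqn.2001-04.com.example:serv02 -l || return 1
    # With "AUTO -all" set, nothing has been auto-assembled; do it by hand:
    mdadm -A /dev/md/mdtest /dev/sdb /dev/sdc
}
```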