I have a cluster running Rocks 5.5 and I am trying to upgrade it to 6.2. The "/export" partition sits on an Intel software RAID 0 array made of two identical hard disks (HDs) (/dev/sdb and /dev/sdc), while the remaining partitions, i.e. "/", "/var" and "swap", are on the (boot) HD, "/dev/sda".
The first upgrade attempt failed with a warning that RAID information had been found on a hard drive, so the installer could not proceed. I naively fixed the problem by running:
dmraid -r -E /dev/sda
The error did not come up again and I was able to perform the upgrade. Using manual partitioning, I formatted the boot hard drive, left the RAID array unformatted, and remounted it as "/export".
After the installation finished, the boot process failed with:
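In hindsight, and as far as I understand it, "dmraid -r -E" erases the RAID metadata block on the device it is given, which may be where things started to go wrong. A non-destructive way to inspect the metadata first would have been something like:

```shell
# As far as I understand it (please correct me if wrong), these only
# report and change nothing on disk:
dmraid -r                  # list every RAID member dmraid detects
dmraid -s                  # show the discovered array sets
wipefs --no-act /dev/sda   # util-linux: list on-disk signatures, dry run
```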
ERROR: ddf1: Cannot find physical drive description on /dev/sdc!
ERROR: ddf1: setting up RAID device /dev/sdc
ERROR: ddf1: Cannot find physical drive description on /dev/sdb!
ERROR: ddf1: setting up RAID device /dev/sdb
/export1: The filesystem size (according to the superblock) is 488378000 blocks
The physical size of the device is 244190638 blocks
Either the superblock or partition table is likely to be corrupt!
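If I am reading the numbers right (my own back-of-the-envelope check, assuming the ext2 filesystem uses 4 KiB blocks), the two sizes in that message correspond to the full array and to a single member disk, which makes me suspect fsck is being pointed at /dev/sdb or /dev/sdc directly instead of at the dm-mapped volume:

```shell
# Superblock block count vs. reported device size, in bytes,
# assuming 4096-byte filesystem blocks:
echo $(( 488378000 * 4096 ))   # 2000396288000, ~2.0 TB: the RAID 0 volume
echo $(( 244190638 * 4096 ))   # 1000204853248, ~1.0 TB: one 1 TB member disk
```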
Booting Rocks in "Rescue" mode reports that the RAID partition was not cleanly unmounted, but I was still able to remount the drive.
"dmraid" shows the RAID array, though with errors:
$ dmraid -r
ERROR: ddf1: Cannot find physical drive description on /dev/sdc!
ERROR: ddf1: setting up RAID device /dev/sdc
ERROR: ddf1: Cannot find physical drive description on /dev/sdb!
ERROR: ddf1: setting up RAID device /dev/sdb
/dev/sdc: isw, "isw_eecceiche", GROUP, ok, 1953525166 sectors, data@ 0
/dev/sdb: isw, "isw_eecceiche", GROUP, ok, 1953525166 sectors, data@ 0
$ dmraid -s
ERROR: ddf1: Cannot find physical drive description on /dev/sdc!
ERROR: ddf1: setting up RAID device /dev/sdc
ERROR: ddf1: Cannot find physical drive description on /dev/sdb!
ERROR: ddf1: setting up RAID device /dev/sdb
*** Group superset isw_eecceiche
--> Active Subset
name : isw_eecceiche_Volume0
size : 3907038720
stride : 256
type : stripe
status : ok
subsets: 0
devs : 2
spares : 0
This is the "fstab":
#
# /etc/fstab
# Created by anaconda on Tue Sep 15 17:35:11 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=90471019-650c-4901-a8f1-e8cce3fbc059 / ext4 defaults 1 1
UUID=5dae925e-6e01-4442-8f5b-07bfbde7ff09 /export ext2 defaults 1 2
UUID=18303228-189f-4fa3-9661-71786323d70d /var ext4 defaults 1 2
UUID=d14c42ec-e41a-4dbd-b6b8-60afb4aa1b14 swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
# The ram-backed filesystem for ganglia RRD graph databases.
tmpfs /var/lib/ganglia/rrds tmpfs size=2045589000,gid=nobody,uid=nobody,defaults 1 0
And "blkid":
/dev/loop0: TYPE="squashfs"
/dev/sda1: UUID="90471019-650c-4901-a8f1-e8cce3fbc059" TYPE="ext4"
/dev/sda2: UUID="18303228-189f-4fa3-9661-71786323d70d" TYPE="ext4"
/dev/sda3: UUID="d14c42ec-e41a-4dbd-b6b8-60afb4aa1b14" TYPE="swap"
/dev/sdb: UUID="M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?" TYPE="ddf_raid_member"
/dev/sdc: UUID="M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?" TYPE="ddf_raid_member"
/dev/sde1: LABEL="Expansion Drive" UUID="BC448C59448C1872" TYPE="ntfs"
/dev/sdd1: UUID="66F2-41D7" TYPE="vfat"
/dev/mapper/isw_eecceiche_Volume0p1: LABEL="/export1" UUID="5dae925e-6e01-4442-8f5b-07bfbde7ff09" TYPE="ext2"
Also the "fdisk" output:
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x44f45cd4
Device Boot Start End Blocks Id System
/dev/sda1 * 1 111403 894841856 83 Linux
/dev/sda2 111403 119562 65536000 83 Linux
/dev/sda3 119562 121602 16382976 82 Linux swap / Solaris
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00045387
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 243201 1953512001 83 Linux
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
6 heads, 1 sectors/track, 325587528 cylinders
Units = cylinders of 6 * 512 = 3072 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
This doesn't look like a partition table
Probably you selected the wrong device.
Device Boot Start End Blocks Id System
/dev/sdc1 22094 22341 743+ cf Unknown
/dev/sdc2 ? 1 1 0 0 Empty
Partition 2 does not end on cylinder boundary.
/dev/sdc3 357936035 357936283 743+ cf Unknown
/dev/sdc4 1 1 0 0 Empty
Partition 4 does not end on cylinder boundary.
Disk /dev/mapper/isw_eecceiche_Volume0: 2000.4 GB, 2000403824640 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk identifier: 0x00045387
Device Boot Start End Blocks Id System
/dev/mapper/isw_eecceiche_Volume0p1 * 1 243201 1953512001 83 Linux
Partition 1 does not start on physical sector boundary.
Disk /dev/mapper/isw_eecceiche_Volume0p1: 2000.4 GB, 2000396289024 bytes
255 heads, 63 sectors/track, 243200 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Alignment offset: 98816 bytes
Disk identifier: 0x00000000
The "/export" partition is accessible and its data is still reachable from "Rescue" mode.
I would like to know whether there is a way to rebuild the RAID metadata without formatting the array or deleting and re-creating it.
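For what it's worth, here is what I am considering trying, based purely on my own reading of the errors, so very much an assumption: the disks seem to carry a stale DDF signature alongside the live Intel "isw" metadata, which would explain both the ddf1 errors and blkid reporting TYPE="ddf_raid_member". A non-destructive first step might be:

```shell
# Dry runs only, nothing is written (device names as in my setup):
wipefs --no-act /dev/sdb   # list every signature, with its on-disk offset
wipefs --no-act /dev/sdc
dmraid -r -f isw           # restrict dmraid to the Intel format, skipping
dmraid -s -f isw           # the ddf1 probe that produces the errors
# If a stale DDF signature shows up, wipefs can erase just that one
# offset ("wipefs -o <offset> <device>"), leaving the isw metadata
# untouched.
```

I have not run the erase step, since I am not sure it is safe here.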
Any help with this problem would be greatly appreciated.