I have a XEN guest running RHEL6 and a LUN presented from Dom0. It contains an LVM volume group called vg_ALHINT (INT stands for integration, ALH is an abbreviation of the Oracle database name). The data is Oracle 11g. I imported and activated the VG, and udev created a mapping for each logical volume.
However, device mapper did not create the mapping for one of the logical volumes [LVs]; for the LV in question it created /dev/dm-2 with a different major and minor number than the rest of the LVs.
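For context, the import and activation on the guest is essentially this (a rough sketch; the real weekly script is more involved):
# vgimport vg_ALHINT
# vgchange -ay vg_ALHINT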
# dmsetup table
vg_ALHINT-arch: 0 4300800 linear 202:16 46139392
vg0-lv6: 0 20971520 linear 202:2 30869504
vg_ALHINT-safeset2: 0 4194304 linear 202:16 35653632
vg0-lv5: 0 2097152 linear 202:2 28772352
vg_ALHINT-safeset1: 0 4186112 linear 202:16 54528000
vg0-lv4: 0 524288 linear 202:2 28248064
vg0-lv3: 0 4194304 linear 202:2 24053760
vg_ALHINT-oradata: **
vg0-lv2: 0 4194304 linear 202:2 19859456
vg0-lv1: 0 2097152 linear 202:2 17762304
vg0-lv0: 0 17760256 linear 202:2 2048
vg_ALHINT-admin: 0 4194304 linear 202:16 41945088
** As you can see, vg_ALHINT-oradata above is empty.
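dmsetup can also be asked about the problem device alone; these are the per-device queries I mean (just noting the commands, outputs omitted):
# dmsetup status vg_ALHINT-oradata
# dmsetup deps vg_ALHINT-oradata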
# ls -l /dev/mapper/
total 0
crw-rw---- 1 root root 10, 58 Apr 3 13:43 control
lrwxrwxrwx 1 root root 7 Apr 3 13:43 vg0-lv0 -> ../dm-0
lrwxrwxrwx 1 root root 7 Apr 3 13:43 vg0-lv1 -> ../dm-1
lrwxrwxrwx 1 root root 7 Apr 3 14:35 vg0-lv2 -> ../dm-2
lrwxrwxrwx 1 root root 7 Apr 3 13:43 vg0-lv3 -> ../dm-3
lrwxrwxrwx 1 root root 7 Apr 3 13:43 vg0-lv4 -> ../dm-4
lrwxrwxrwx 1 root root 7 Apr 3 13:43 vg0-lv5 -> ../dm-5
lrwxrwxrwx 1 root root 7 Apr 3 13:43 vg0-lv6 -> ../dm-6
lrwxrwxrwx 1 root root 7 Apr 3 13:59 vg_ALHINT-admin -> ../dm-8
lrwxrwxrwx 1 root root 7 Apr 3 13:59 vg_ALHINT-arch -> ../dm-9
brw-rw---- 1 root disk 253, 7 Apr 3 14:37 vg_ALHINT-oradata
lrwxrwxrwx 1 root root 8 Apr 3 13:59 vg_ALHINT-safeset1 -> ../dm-10
lrwxrwxrwx 1 root root 8 Apr 3 13:59 vg_ALHINT-safeset2 -> ../dm-11
vg_ALHINT-oradata is not created until I run dmsetup mknodes.
# cat /proc/partitions
major minor #blocks name
202 0 26214400 xvda
202 1 262144 xvda1
202 2 25951232 xvda2
253 0 8880128 dm-0
253 1 1048576 dm-1
253 2 2097152 dm-2
253 3 2097152 dm-3
253 4 262144 dm-4
253 5 1048576 dm-5
253 6 10485760 dm-6
202 16 29360128 xvdb
253 8 2097152 dm-8
253 9 2150400 dm-9
253 10 2093056 dm-10
253 11 2097152 dm-11
dm-7 was originally vg_ALHINT-oradata, but it is missing. I ran dmsetup mknodes and /dev/dm-7 was created, but it is still missing from /proc/partitions.
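For reference, dmsetup mknodes can also be pointed at the single device rather than all of them:
# dmsetup mknodes vg_ALHINT-oradata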
# ls -l /dev/dm-7
brw-rw---- 1 root disk 253, 7 Apr 3 13:59 /dev/dm-7
The device's major and minor numbers are 253:7, while the LVs of the same VG are on 202:nn. lvs tells me this LV is suspended:
# lvs
Logging initialised at Thu Apr 3 14:44:19 2014
Set umask from 0022 to 0077
Finding all logical volumes
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
lv0 vg0 -wi-ao---- 8.47g
lv1 vg0 -wi-ao---- 1.00g
lv2 vg0 -wi-ao---- 2.00g
lv3 vg0 -wi-ao---- 2.00g
lv4 vg0 -wi-ao---- 256.00m
lv5 vg0 -wi-ao---- 1.00g
lv6 vg0 -wi-ao---- 10.00g
admin vg_ALHINT -wi-a----- 2.00g
arch vg_ALHINT -wi-a----- 2.05g
oradata vg_ALHINT -wi-s----- 39.95g
safeset1 vg_ALHINT -wi-a----- 2.00g
safeset2 vg_ALHINT -wi-a----- 2.00g
Wiping internal VG cache
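To decode the Attr column: the fifth character is the state, 'a' for active and 's' for suspended, so oradata is stuck suspended. The device-mapper side should agree; this is the check I have in mind (output trimmed to the relevant line):
# dmsetup info vg_ALHINT-oradata | grep State
State:             SUSPENDED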
These disks are created from a snapshot of the production database; Oracle is shut down and the VG exported before the snapshot. I should note that we do the same thing for hundreds of databases every week via script. Since this was a snapshot, I took the table from the raw device mapper on production and tried to use it to recreate the missing one:
0 35651584 linear 202:16 2048
35651584 4087808 linear 202:16 50440192
39739392 2097152 linear 202:16 39847936
41836544 41943040 linear 202:16 58714112
After suspending the device with dmsetup suspend /dev/dm-7, I ran dmsetup load /dev/dm-7 table.txt. Next I tried to resume the device:
# dmsetup resume /dev/dm-7
device-mapper: resume ioctl on vg_ALHINT-oradata failed: Invalid argument
Command failed
#
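If the kernel rejected the table at load time, nothing lands in the inactive table slot; my assumption is that this is worth verifying, since 'Tables present' should read 'LIVE & INACTIVE' after a successful load:
# dmsetup info vg_ALHINT-oradata | grep Tables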
Any ideas? I am really lost. (Yes, I have rebooted several times and re-taken the snapshot, but I always get the same problem. I even reinstalled the server and ran yum update.)
// Edit
I forgot to add that this is the original dmsetup table from the production environment; as mentioned above, I tried to load the oradata layout onto the integration server:
# dmsetup table
vg_ALHPRD-safeset2: 0 4194304 linear 202:32 35653632
vg_ALHPRD-safeset1: 0 4186112 linear 202:32 54528000
vg_ALHPRD-oradata: 0 35651584 linear 202:32 2048
vg_ALHPRD-oradata: 35651584 4087808 linear 202:32 50440192
vg_ALHPRD-oradata: 39739392 2097152 linear 202:32 39847936
vg_ALHPRD-oradata: 41836544 41943040 linear 202:32 58714112
vg_ALHPRD-admin: 0 4194304 linear 202:32 41945088
// Edit
I ran vgscan --mknodes and got:
The link /dev/vg_ALHINT/oradata should have been created by udev but it was not found. Falling back to direct link creation.
# ls -l /dev/vg_ALHINT/oradata
lrwxrwxrwx 1 root root 29 Apr 3 14:50 /dev/vg_ALHINT/oradata -> /dev/mapper/vg_ALHINT-oradata
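If udev were merely lagging, draining its event queue before retrying would be worth ruling out (just an idea for excluding a race, not a fix I can vouch for):
# udevadm settle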
I still cannot activate it and get the following error message:
device-mapper: resume ioctl on failed: Invalid argument Unable to resume vg_ALHINT-oradata (253:7)
// Edit
/var/log/messages shows a stack trace:
Apr 3 13:58:09 iui-alhdb01 kernel: blkfront: xvdb: barriers disabled
Apr 3 13:58:09 iui-alhdb01 kernel: xvdb: unknown partition table
Apr 3 13:59:35 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 14:02:31 iui-alhdb01 ntpd[1093]: 0.0.0.0 c612 02 freq_set kernel 5.242 PPM
Apr 3 14:02:31 iui-alhdb01 ntpd[1093]: 0.0.0.0 c615 05 clock_sync
Apr 3 14:30:13 iui-alhdb01 kernel: device-mapper: table: 253:2: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 14:33:34 iui-alhdb01 kernel: INFO: task vi:1394 blocked for more than 120 seconds.
Apr 3 14:33:34 iui-alhdb01 kernel: Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr 3 14:33:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 3 14:33:34 iui-alhdb01 kernel: vi D 0000000000000000 0 1394 1271 0x00000084
Apr 3 14:33:34 iui-alhdb01 kernel: ffff88007aef19b8 0000000000000082 ffff88007aef1978 ffffffffa000443c
Apr 3 14:33:34 iui-alhdb01 kernel: ffff88007d208d80 ffff880037cabc08 ffff880037cda0c8 ffff8800022168a8
Apr 3 14:33:34 iui-alhdb01 kernel: ffff880037da45f8 ffff88007aef1fd8 000000000000fbc8 ffff880037da45f8
Apr 3 14:33:34 iui-alhdb01 kernel: Call Trace:
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf230>] sync_buffer+0x40/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8152918f>] __wait_on_bit+0x5f/0x90
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff81529238>] out_of_line_wait_on_bit+0x78/0x90
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8109b310>] ? wake_bit_function+0x0/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1e6>] __wait_on_buffer+0x26/0x30
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffffa0085875>] __ext4_get_inode_loc+0x1e5/0x3b0 [ext4]
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffffa0088006>] ext4_iget+0x86/0x7d0 [ext4]
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffffa008ec35>] ext4_lookup+0xa5/0x140 [ext4]
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff81198b05>] do_lookup+0x1a5/0x230
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff81198e90>] __link_path_walk+0x200/0xff0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8114a667>] ? handle_pte_fault+0xf7/0xb00
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811a3c6a>] ? dput+0x9a/0x150
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff81199f3a>] path_walk+0x6a/0xe0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119a14b>] filename_lookup+0x6b/0xc0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119b277>] user_path_at+0x57/0xa0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8104a98c>] ? __do_page_fault+0x1ec/0x480
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119707b>] ? putname+0x2b/0x40
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118eac0>] vfs_fstatat+0x50/0xa0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811c4645>] ? nr_blockdev_pages+0x15/0x70
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8115c4ad>] ? si_swapinfo+0x1d/0x90
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118ec3b>] vfs_stat+0x1b/0x20
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118ec64>] sys_newstat+0x24/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff810e2057>] ? audit_syscall_entry+0x1d7/0x200
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr 3 14:35:34 iui-alhdb01 kernel: INFO: task vi:1394 blocked for more than 120 seconds.
Apr 3 14:35:34 iui-alhdb01 kernel: Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr 3 14:35:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 3 14:35:34 iui-alhdb01 kernel: vi D 0000000000000000 0 1394 1271 0x00000084
Apr 3 14:35:34 iui-alhdb01 kernel: ffff88007aef19b8 0000000000000082 ffff88007aef1978 ffffffffa000443c
Apr 3 14:35:34 iui-alhdb01 kernel: ffff88007d208d80 ffff880037cabc08 ffff880037cda0c8 ffff8800022168a8
Apr 3 14:35:34 iui-alhdb01 kernel: ffff880037da45f8 ffff88007aef1fd8 000000000000fbc8 ffff880037da45f8
Apr 3 14:35:34 iui-alhdb01 kernel: Call Trace:
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf230>] sync_buffer+0x40/0x50
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8152918f>] __wait_on_bit+0x5f/0x90
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81529238>] out_of_line_wait_on_bit+0x78/0x90
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8109b310>] ? wake_bit_function+0x0/0x50
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1e6>] __wait_on_buffer+0x26/0x30
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffffa0085875>] __ext4_get_inode_loc+0x1e5/0x3b0 [ext4]
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffffa0088006>] ext4_iget+0x86/0x7d0 [ext4]
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffffa008ec35>] ext4_lookup+0xa5/0x140 [ext4]
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81198b05>] do_lookup+0x1a5/0x230
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81198e90>] __link_path_walk+0x200/0xff0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8114a667>] ? handle_pte_fault+0xf7/0xb00
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811a3c6a>] ? dput+0x9a/0x150
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81199f3a>] path_walk+0x6a/0xe0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119a14b>] filename_lookup+0x6b/0xc0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119b277>] user_path_at+0x57/0xa0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8104a98c>] ? __do_page_fault+0x1ec/0x480
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119707b>] ? putname+0x2b/0x40
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118eac0>] vfs_fstatat+0x50/0xa0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4645>] ? nr_blockdev_pages+0x15/0x70
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8115c4ad>] ? si_swapinfo+0x1d/0x90
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118ec3b>] vfs_stat+0x1b/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118ec64>] sys_newstat+0x24/0x50
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff810e2057>] ? audit_syscall_entry+0x1d7/0x200
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr 3 14:35:34 iui-alhdb01 kernel: INFO: task vgdisplay:1437 blocked for more than 120 seconds.
Apr 3 14:35:34 iui-alhdb01 kernel: Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr 3 14:35:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 3 14:35:34 iui-alhdb01 kernel: vgdisplay D 0000000000000000 0 1437 1423 0x00000080
Apr 3 14:35:34 iui-alhdb01 kernel: ffff88007da35a18 0000000000000086 ffff88007da359d8 ffffffffa000443c
Apr 3 14:35:34 iui-alhdb01 kernel: 000000000007fff0 0000000000010000 ffff88007da359d8 ffff88007d24d380
Apr 3 14:35:34 iui-alhdb01 kernel: ffff880037c8c5f8 ffff88007da35fd8 000000000000fbc8 ffff880037c8c5f8
Apr 3 14:35:34 iui-alhdb01 kernel: Call Trace:
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c8a9d>] __blockdev_direct_IO_newtrunc+0xb7d/0x1270
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c9207>] __blockdev_direct_IO+0x77/0xe0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5487>] blkdev_direct_IO+0x57/0x60
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811217bb>] generic_file_aio_read+0x6bb/0x700
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5fd0>] ? blkdev_get+0x10/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5fe0>] ? blkdev_open+0x0/0xc0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118617f>] ? __dentry_open+0x23f/0x360
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4841>] blkdev_aio_read+0x51/0x80
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81188e8a>] do_sync_read+0xfa/0x140
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff810ec3f6>] ? rcu_process_dyntick+0xd6/0x120
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8109b290>] ? autoremove_wake_function+0x0/0x40
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c479c>] ? block_ioctl+0x3c/0x40
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119dc12>] ? vfs_ioctl+0x22/0xa0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119ddb4>] ? do_vfs_ioctl+0x84/0x580
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81226496>] ? security_file_permission+0x16/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81189775>] vfs_read+0xb5/0x1a0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811898b1>] sys_read+0x51/0x90
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff810e1e4e>] ? __audit_syscall_exit+0x25e/0x290
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr 3 14:39:19 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 14:53:57 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 15:02:42 iui-alhdb01 yum[1544]: Installed: sos-2.2-47.el6.noarch
Apr 3 15:52:29 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 15:59:08 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Answer 1
As stated in the kernel documentation devices.txt, major 202 is the "Xen Virtual Block Device" and major 253 is LVM/device mapper. All of your dm-x devices get 253:n, not 202:n.
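You can confirm the registered majors on the guest itself (a quick check; the name strings vary by kernel):
# grep -E '202|253' /proc/devices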
The error message is clear:
device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Something seems to have changed on the XEN device. The vg_ALHPRD-oradata table cannot be loaded because it tries to access storage on 202:16 that does not exist.
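The arithmetic spells it out: the last table segment starts at sector 58714112 and is 41943040 sectors long, so the backing device must reach sector 58714112 + 41943040 = 100657152 (about 48 GiB at 512 bytes per sector), while xvdb has only dev_size = 58720256 sectors, exactly 28 GiB. The device size can be confirmed on the guest:
# blockdev --getsz /dev/xvdb
58720256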
Answer 2
It seems multipath on the hypervisor refused to update the LUN size mapping. This LUN was originally 28Gb, but was later grown to 48Gb on the storage array. The VG metadata says 48G, and the disk really is 48G, but multipath was never updated, so it still shows 28G.
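The usual online-grow sequence on the hypervisor would be to rescan every SCSI path and then resize the map, roughly like this (a sketch using one path name from the output below; the rescan has to be repeated for each sdX path):
# echo 1 > /sys/block/sdt/device/rescan
# multipathd -k'resize map 350002acf962421ba'
As described further down, exactly this changed nothing in my case.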
Multipath still insisting on 28G:
# multipath -l 350002acf962421ba
350002acf962421ba dm-17 3PARdata,VV
size=28G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 8:0:0:22 sdt 65:48 active undef running
|- 10:0:0:22 sdbh 67:176 active undef running
|- 7:0:0:22 sddq 71:128 active undef running
|- 9:0:0:22 sdfb 129:208 active undef running
|- 8:0:1:22 sdmz 70:432 active undef running
|- 7:0:1:22 sdoj 128:496 active undef running
|- 10:0:1:22 sdop 129:336 active undef running
|- 9:0:1:22 sdqm 132:352 active undef running
|- 7:0:2:22 sdxh 71:624 active undef running
|- 8:0:2:22 sdzy 131:704 active undef running
|- 10:0:2:22 sdaab 131:752 active undef running
|- 9:0:2:22 sdaed 66:912 active undef running
|- 7:0:3:22 sdakm 132:992 active undef running
|- 10:0:3:22 sdall 134:880 active undef running
|- 8:0:3:22 sdamx 8:1232 active undef running
`- 9:0:3:22 sdaqa 69:1248 active undef running
The actual disk size on the storage array:
# showvv ALHIDB_SNP_001
-Rsvd(MB)-- -(MB)-
Id Name Prov Type CopyOf BsId Rd -Detailed_State- Adm Snp Usr VSize
4098 ALHIDB_SNP_001 snp vcopy ALHIDB_SNP_001.ro 5650 RW normal -- -- -- 49152
To verify that I am looking at the right disk:
# showvlun -showcols VVName,VV_WWN| grep -i 0002acf962421ba
ALHIDB_SNP_001 50002ACF962421BA
The VG thinks it is 48G:
--- Volume group ---
VG Name vg_ALHINT
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 30
VG Access read/write
VG Status exported/resizable
MAX LV 0
Cur LV 5
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 48.00 GiB
PE Size 4.00 MiB
Total PE 12287
Alloc PE / Size 12287 / 48.00 GiB
Free PE / Size 0 / 0
VG UUID qqZ9Vi-5Ob1-R6zb-YeWa-jDfg-9wc7-E2wsem
I tried this because the disk still showed 28G after I rescanned the new disk on the HBAs and reconfigured multipath, but it changed nothing:
# multipathd -k'resize map 350002acf962421ba'
Versions:
lvm2-2.02.56-8.100.3.el5
device-mapper-multipath-libs-0.4.9-46.100.5.el5
Solution: since I could not come up with anything else, I did the following. I never mentioned before that we run OVM 3.2, so parts of the solution involve OVM:
i) Shut down the guest on Xen via OVM.
ii) Removed the CD.
iii) Removed the LUN from OVM.
iv) Removed the now non-existent LUN on the hypervisor.
v) Rescanned storage in OVM.
vi) Waited 30 minutes.
vii) Presented my disk to the hypervisor with a different LUN ID.
viii) Rescanned storage in OVM.
Now, magically, I see a 48G disk:
# multipath -l 350002acf962421ba
350002acf962421ba dm-18 3PARdata,VV
size=48G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 9:0:0:127 sdt 65:48 active undef running
|- 9:0:1:127 sdbh 67:176 active undef running
|- 9:0:2:127 sddo 71:96 active undef running
|- 9:0:3:127 sdfb 129:208 active undef running
|- 10:0:3:127 sdmz 70:432 active undef running
|- 10:0:0:127 sdoh 128:464 active undef running
|- 10:0:1:127 sdop 129:336 active undef running
|- 10:0:2:127 sdqm 132:352 active undef running
|- 7:0:1:127 sdzu 131:640 active undef running
|- 7:0:0:127 sdxh 71:624 active undef running
|- 7:0:3:127 sdaed 66:912 active undef running
|- 7:0:2:127 sdaab 131:752 active undef running
|- 8:0:0:127 sdakm 132:992 active undef running
|- 8:0:1:127 sdall 134:880 active undef running
|- 8:0:2:127 sdamx 8:1232 active undef running
`- 8:0:3:127 sdaqa 69:1248 active undef running