How can I copy the data from my old disk?

I have a server. After the existing CentOS system broke, I installed a new CentOS system on a new disk.

The server now runs the new system from the new disk, but I want to copy my data off the old disk.

My new disk is sdb and the old disk is sda:

[root@localhost mapper]# fdisk -l 

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x2cbfcf8a

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64      121602   976248832   8e  Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0xe8a4e8a4

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sdb2              64      121602   976248832   8e  Linux LVM

Disk /dev/mapper/VolGroup-lv_root: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes

Output of df -TH:

[root@localhost mapper]# df -TH 
Filesystem                   Type   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root ext4    53G  1.1G   50G   3% /
tmpfs                        tmpfs  3.8G     0  3.8G   0% /dev/shm
/dev/sdb1                    ext4   508M   34M  449M   7% /boot
/dev/mapper/VolGroup-lv_home ext4   924G  210M  877G   1% /home

Trying to look inside VolGroup-lv_root:

[root@localhost mapper]# cd VolGroup-lv_root 
-bash: cd: VolGroup-lv_root: not directory
[root@localhost mapper]# ll
total 0
crw-rw----. 1 root root 10, 58 1月  29 16:01 control
lrwxrwxrwx. 1 root root      7 1月  29 16:01 VolGroup-lv_home -> ../dm-2
lrwxrwxrwx. 1 root root      7 1月  29 16:01 VolGroup-lv_root -> ../dm-0
lrwxrwxrwx. 1 root root      7 1月  29 16:01 VolGroup-lv_swap -> ../dm-1

Checking with mount:

[root@localhost /]# mount 
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sdb1 on /boot type ext4 (rw)
/dev/mapper/VolGroup-lv_home on /home type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

# ls -lh /dev/mapper/
total 0
crw-rw----. 1 root root 10, 58 1月  29 16:01 control
lrwxrwxrwx. 1 root root      7 1月  29 16:01 VolGroup-lv_home -> ../dm-2
lrwxrwxrwx. 1 root root      7 1月  29 16:01 VolGroup-lv_root -> ../dm-0
lrwxrwxrwx. 1 root root      7 1月  29 16:01 VolGroup-lv_swap -> ../dm-1

Edit

I ran vgscan and vgchange -a y:

[root@localhost mapper]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VolGroup" using metadata type lvm2
[root@localhost mapper]# vgchange -a y
  3 logical volume(s) in volume group "VolGroup" now active

The files in /dev/ are as follows:

# ls /dev/
block            input               nvram   ram9      tty10  tty31  tty52    vcs
bsg              kmsg                oldmem  random    tty11  tty32  tty53    vcs1
bus              log                 port    raw       tty12  tty33  tty54    vcs2
char             loop0               ppp     root      tty13  tty34  tty55    vcs3
console          loop1               ptmx    rtc       tty14  tty35  tty56    vcs4
core             loop2               ptp0    rtc0      tty15  tty36  tty57    vcs5
cpu              loop3               ptp1    sda       tty16  tty37  tty58    vcs6
cpu_dma_latency  loop4               pts     sda1      tty17  tty38  tty59    vcsa
crash            loop5               ram0    sda2      tty18  tty39  tty6     vcsa1
disk             loop6               ram1    sdb       tty19  tty4   tty60    vcsa2
dm-0             loop7               ram10   sdb1      tty2   tty40  tty61    vcsa3
dm-1             lp0                 ram11   sdb2      tty20  tty41  tty62    vcsa4
dm-2             lp1                 ram12   sg0       tty21  tty42  tty63    vcsa5
fb               lp2                 ram13   sg1       tty22  tty43  tty7     vcsa6
fb0              lp3                 ram14   shm       tty23  tty44  tty8     vga_arbiter
fd               MAKEDEV             ram15   snapshot  tty24  tty45  tty9     VolGroup
full             mapper              ram2    stderr    tty25  tty46  ttyS0    zero
fuse             mcelog              ram3    stdin     tty26  tty47  ttyS1
hidraw0          mem                 ram4    stdout    tty27  tty48  ttyS2
hidraw1          net                 ram5    systty    tty28  tty49  ttyS3
hpet             network_latency     ram6    tty       tty29  tty5   urandom
hugepages        network_throughput  ram7    tty0      tty3   tty50  usbmon0
hvc0             null                ram8    tty1      tty30  tty51  usbmon1 


Then I try to mount it as shown below:

[root@localhost VolGroup]# mount /dev/VolGroup/lv_root   /mnt/lv_root_test
mount: mount point /mnt/lv_root_test does not exist
[root@localhost VolGroup]# mount /dev/VolGroup/lv_root   /mnt/lv_root
mount: mount point /mnt/lv_root does not exist

/mnt/ is empty.
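The failures above only mean that the target directories are missing: mount(8) never creates its mount point. A minimal sketch of the fix (the mount line itself must run as root on the server, so it is left commented):

```shell
# mount(8) refuses to mount onto a directory that does not exist,
# so create the mount point first:
mkdir -p /mnt/lv_root
# mount /dev/VolGroup/lv_root /mnt/lv_root   # as root; now succeeds
```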


Edit 2

I ran lvmdiskscan, which shows the following:

[root@localhost mapper]# lvmdiskscan
  /dev/ram0             [      16.00 MiB] 
  /dev/loop0            [     930.53 GiB] 
  /dev/root             [      50.00 GiB] 
  /dev/ram1             [      16.00 MiB] 
  /dev/sda1             [     500.00 MiB] 
  /dev/VolGroup/lv_swap [       7.05 GiB] 
  /dev/ram2             [      16.00 MiB] 
  /dev/sda2             [     931.02 GiB] 
  /dev/VolGroup/lv_home [     873.97 GiB] 
  /dev/ram3             [      16.00 MiB] 
  /dev/ram4             [      16.00 MiB] 
  /dev/ram5             [      16.00 MiB] 
  /dev/ram6             [      16.00 MiB] 
  /dev/ram7             [      16.00 MiB] 
  /dev/ram8             [      16.00 MiB] 
  /dev/ram9             [      16.00 MiB] 
  /dev/ram10            [      16.00 MiB] 
  /dev/ram11            [      16.00 MiB] 
  /dev/ram12            [      16.00 MiB] 
  /dev/ram13            [      16.00 MiB] 
  /dev/ram14            [      16.00 MiB] 
  /dev/ram15            [      16.00 MiB] 
  /dev/sdb1             [     500.00 MiB] 
  /dev/sdb2             [     931.02 GiB] LVM physical volume
  3 disks
  20 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume

Answer 1

vgscan
vgchange -a y

Run the commands above as root to scan for and activate the old logical volumes. Devices will be created in the form /dev/volumegroup/logicalvolume.

Then mount them:

mkdir -p /mnt/lv_root
mount /dev/volumegroup/logicalvolume /mnt/lv_root
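Once the old root volume is mounted, the data itself can be copied off with rsync. The server-side sequence is shown as comments (the backup destination path is just an example, not from the question); the same rsync invocation is then demonstrated on throwaway /tmp directories, since the real device paths only exist on that server:

```shell
# On the server (as root) the full sequence would be:
#   mkdir -p /mnt/lv_root
#   mount -o ro /dev/VolGroup/lv_root /mnt/lv_root   # read-only protects the old data
#   rsync -aH /mnt/lv_root/ /home/old-root-backup/   # destination is an example path
#   umount /mnt/lv_root
# The same rsync invocation, demonstrated on throwaway directories:
mkdir -p /tmp/old_root /tmp/old_root_copy
echo "example file" > /tmp/old_root/data.txt
rsync -aH /tmp/old_root/ /tmp/old_root_copy/   # -a keeps perms/owners/times, -H keeps hard links
```

The trailing slash on the source matters: `/mnt/lv_root/` copies the *contents* of the mount point, while `/mnt/lv_root` would create an extra `lv_root` directory inside the destination.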

It looks like your old LVM installation is damaged, because vgscan scans for all volume groups on the system and lvmdiskscan lists every device visible to lvm2. Both outputs show only one LVM physical volume and one volume group visible to the system (the newly installed one).

[root@localhost mapper]# lvmdiskscan 
/dev/ram0 [ 16.00 MiB] 
/dev/loop0 [ 930.53 GiB] 
/dev/root [ 50.00 GiB] 
/dev/ram1 [ 16.00 MiB] 
/dev/sda1 [ 500.00 MiB] 
/dev/VolGroup/lv_swap [ 7.05 GiB] 
/dev/sda2 [ 931.02 GiB] 
/dev/VolGroup/lv_home [ 873.97 GiB] 
/dev/sdb1 [ 500.00 MiB] 
/dev/sdb2 [ 931.02 GiB] LVM physical volume 
3 disks 
20 partitions 
0 LVM physical volume whole disks 
1 LVM physical volume

If lvmdiskscan had found the old LVM physical volume, its output would contain a line like:

/dev/sda2 [ 931.02 GiB] LVM physical volume 
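
To double-check whether /dev/sda2 still carries an LVM label at all, a few standard LVM diagnostics can be run as root on the server. This is a sketch beyond the original answer; the privileged commands are shown as comments, followed by a harmless demonstration of `file -s` on an ordinary file:

```shell
# Run as root on the affected server:
#   pvs /dev/sda2      # lists the PV only if its LVM label is intact
#   pvck /dev/sda2     # checks the LVM metadata area on the partition
#   file -s /dev/sda2  # reports "LVM2 PV ..." when a header is present
# 'file -s' demonstrated on a regular file instead of a block device:
echo "placeholder contents" > /tmp/probe.txt
file -s /tmp/probe.txt
```

If the label is gone but the old root filesystem is still readable, LVM metadata backups under its /etc/lvm/archive directory may allow recovery with vgcfgrestore.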
