Slow write speed (<2 MB/s) on both unencrypted and LUKS-encrypted ext4 filesystems, Dell PowerEdge T330 server running Debian 8 with mdadm software RAID1

I installed Debian 8 on a new Dell PowerEdge T330 with two partitions, / and /var, on a RAID1 array built with mdadm. While testing our basic applications, mysql and tomcat stalled. Read performance was reasonable, but write performance on both partitions was terrible. I observed this on one of two identical servers set up the same way. Any help would be greatly appreciated.

Performance

root@bcmdit-519:/home/bcmdit# FILE=/tmp/test_data && dd bs=16k \
    count=102400 oflag=direct if=/dev/zero of=$FILE && \
    rm $FILE && FILE=/var/tmp/test_data && dd bs=16k \
    count=102400 oflag=direct if=/dev/zero of=$FILE && rm $FILE

102400+0 records in
102400+0 records out
1677721600 bytes (1.7 GB) copied, 886.418 s, 1.9 MB/s

102400+0 records in
102400+0 records out
1677721600 bytes (1.7 GB) copied, 894.832 s, 1.9 MB/s
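To put that figure in perspective (my own arithmetic, not from the original post): 1.9 MB/s of 16 KiB direct writes is only about 116 write operations per second, i.e. each small synchronous write is costing the drive close to a full rotation:

```shell
# 1,677,721,600 bytes in 886.418 s, at 16384 bytes per request:
awk 'BEGIN { printf "%.0f IOPS\n", 1677721600 / 886.418 / 16384 }'
# prints: 116 IOPS
```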

root@bcmdit-519:/home/bcmdit# hdparm -t /dev/sda; hdparm -t /dev/sdb; hdparm -t /dev/md0; hdparm -t /dev/md1

/dev/sda:

    Timing buffered disk reads: 394 MB in  3.00 seconds = 131.15 MB/sec

/dev/sdb:

    Timing buffered disk reads: 394 MB in  3.01 seconds = 131.05 MB/sec

/dev/md0:

    Timing buffered disk reads: 398 MB in  3.00 seconds = 132.45 MB/sec

/dev/md1:

    Timing buffered disk reads: 318 MB in  3.00 seconds = 106.00 MB/sec

References

https://severfault.com/questions/832117/how-increase-write-speed-of-raid1-mdadm
https://wiki.mikejung.biz/Software_RAID
Slow write access times with RAID1: https://bbs.archlinux.org/viewtopic.php?id=173791

Configuration

Encryption settings in use:

root@bcmdit-519:/home/bcmdit# cryptsetup luksDump UUID=1e7b64ac-f187-4fac-9712-8e0dacadfca7|grep -E 'Cipher|Hash'

Cipher name:    aes
Cipher mode:    xts-plain64
Hash spec:      sha1

Configuration snippets

root@bcmdit-519:/home/bcmdit# facter virtual productname lsbdistid \
                     lsbdistrelease processor0 blockdevice_sda_model \  
                     blockdevice_sdb_model bios_version && uname -a && uptime
----------

    bios_version => 2.4.3
    blockdevice_sda_model => ST1000NX0423
    blockdevice_sdb_model => ST1000NX0423
    lsbdistid => Debian
    lsbdistrelease => 8.10
    processor0 => Intel(R) Xeon(R) CPU E3-1230 v6 @ 3.50GHz
    productname => PowerEdge T330
    virtual => physical
    Linux bcmdit-519 3.16.0-4-amd64 #1 SMP Debian 3.16.51-3 (2017-12-13) x86_64 GNU/Linux
     14:45:58 up  2:49,  2 users,  load average: 0.06, 0.17, 0.44

 root@bcmdit-519:/home/bcmdit# grep GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub

    GRUB_CMDLINE_LINUX_DEFAULT="quiet erst_disable elevator=deadline"

root@bcmdit-519:/home/bcmdit# free -m         

             total       used       free     shared    buffers     cached
Mem:         32202       1532      30670          9         17        369
-/+ buffers/cache:       1145      31056
Swap:            0          0          0

root@bcmdit-519:/home/bcmdit# parted /dev/sda print

    Model: ATA ST1000NX0423 (scsi)
    Disk /dev/sda: 1000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags: 

    Number  Start   End     Size   Type      File system  Flags
     1      1049kB  500GB   500GB  primary                boot, raid
     2      500GB   1000GB  500GB  extended
     5      500GB   1000GB  500GB  logical                raid

root@bcmdit-519:/home/bcmdit# parted /dev/sdb print

    Model: ATA ST1000NX0423 (scsi)
    Disk /dev/sdb: 1000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags: 

    Number  Start   End     Size   Type      File system  Flags
     1      1049kB  500GB   500GB  primary                raid
     2      500GB   1000GB  500GB  extended
     5      500GB   1000GB  500GB  logical                raid

----------

root@bcmdit-519:/home/bcmdit# cat /proc/mdstat

    Personalities : [raid1] 
    md1 : active raid1 sda5[0] sdb5[1]
          488249344 blocks super 1.2 [2/2] [UU]
          bitmap: 3/4 pages [12KB], 65536KB chunk

    md0 : active raid1 sda1[0] sdb1[1]
          488248320 blocks super 1.2 [2/2] [UU]
          bitmap: 2/4 pages [8KB], 65536KB chunk

    unused devices: <none>

root@bcmdit-519:/home/bcmdit# mdadm --query --detail /dev/md0

    /dev/md0:
            Version : 1.2
      Creation Time : Mon Apr 16 13:46:51 2018
         Raid Level : raid1
         Array Size : 488248320 (465.63 GiB 499.97 GB)
      Used Dev Size : 488248320 (465.63 GiB 499.97 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent

      Intent Bitmap : Internal

        Update Time : Tue May 15 14:26:47 2018
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0

               Name : bcmdit-519:0  (local to host bcmdit-519)
               UUID : afd3968c:2e8b191d:4504f21e:255b6470
             Events : 1703

        Number   Major   Minor   RaidDevice State
           0       8        1        0      active sync   /dev/sda1
           1       8       17        1      active sync   /dev/sdb1

 root@bcmdit-519:/home/bcmdit# mdadm --query --detail /dev/md1

    /dev/md1: 

            Version : 1.2
      Creation Time : Mon Apr 16 13:47:06 2018
         Raid Level : raid1
         Array Size : 488249344 (465.63 GiB 499.97 GB)
      Used Dev Size : 488249344 (465.63 GiB 499.97 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent

      Intent Bitmap : Internal

        Update Time : Tue May 15 14:15:11 2018
              State : active 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0

               Name : bcmdit-519:1  (local to host bcmdit-519)
               UUID : e46f968a:e8fff775:ecee9cfb:4ad88574
             Events : 2659

        Number   Major   Minor   RaidDevice State
           0       8        5        0      active sync   /dev/sda5
           1       8       21        1      active sync   /dev/sdb5

root@bcmdit-519:/home/bcmdit# cat /etc/crypttab

    crypt1 UUID=1e7b64ac-f187-4fac-9712-8e0dacadfca7 /root/.crypt1 luks

root@bcmdit-519:/home/bcmdit# grep -v '^#' /etc/fstab

    UUID=c6baa173-8ea6-4598-a965-eee728a93d69 /               ext4    defaults,errors=remount-ro 0       1
    /dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
    /dev/mapper/crypt1 /var ext4 defaults,errors=remount-ro 0       2
    /var/swapfile1 none swap sw,nofail 0       0

root@bcmdit-519:/home/bcmdit# smartctl -a /dev/sda|head -n 20

    smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-4-amd64] (local build)
    Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org

    === START OF INFORMATION SECTION ===
    Device Model:     ST1000NX0423
    Serial Number:    W4713QXE
    LU WWN Device Id: 5 000c50 0abb06247
    Add. Product Id:  DELL(tm)
    Firmware Version: NA07
    User Capacity:    1,000,204,886,016 bytes [1.00 TB]
    Sector Size:      512 bytes logical/physical
    Rotation Rate:    7200 rpm
    Form Factor:      2.5 inches
    Device is:        Not in smartctl database [for details use: -P showall]
    ATA Version is:   ACS-3 (minor revision not indicated)
    SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
    Local Time is:    Tue May 15 14:29:03 2018 PDT
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

root@bcmdit-519:/home/bcmdit# smartctl -a /dev/sdb|head -n 20

    smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-4-amd64] (local build)
    Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org

    === START OF INFORMATION SECTION ===
    Device Model:     ST1000NX0423
    Serial Number:    W4714VDQ
    LU WWN Device Id: 5 000c50 0abf99927
    Add. Product Id:  DELL(tm)
    Firmware Version: NA07
    User Capacity:    1,000,204,886,016 bytes [1.00 TB]
    Sector Size:      512 bytes logical/physical
    Rotation Rate:    7200 rpm
    Form Factor:      2.5 inches
    Device is:        Not in smartctl database [for details use: -P showall]
    ATA Version is:   ACS-3 (minor revision not indicated)
    SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
    Local Time is:    Tue May 15 14:29:11 2018 PDT
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

Update 1

With bs=16M:

root@bcmdit-519:/tmp# FILE=/tmp/test_data \
&& dd bs=16M count=102 oflag=direct if=/dev/zero of=$FILE \
&& rm $FILE \
&& FILE=/var/tmp/test_data \
&& dd bs=16M count=102 oflag=direct if=/dev/zero of=$FILE \
&& rm $FILE
102+0 records in
102+0 records out
1711276032 bytes (1.7 GB) copied, 16.6394 s, 103 MB/s
102+0 records in
102+0 records out
1711276032 bytes (1.7 GB) copied, 17.8649 s, 95.8 MB/s

Update 2: The Seagate drives' model/serial information from SMART was confirmed to correspond to enterprise-class hard drives: https://www.cnet.com/products/seagate-enterprise-capacity-2-5-hdd-v-3-1tb-sata-512n/specs/

Update 3: Found that the drive write cache was off; turned it on with:

hdparm -W1 /dev/sd*
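One caveat (my addition, not part of the original update): `hdparm -W1` does not survive a reboot on its own; with the stock Debian hdparm package the setting can be pinned per device in /etc/hdparm.conf:

```
/dev/sda {
    write_cache = on
}

/dev/sdb {
    write_cache = on
}
```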

Now getting much better results with bs=16k:

root@bcmdit-519:/home/bcmdit# FILE=/tmp/test_data && dd bs=16k count=102400 oflag=direct if=/dev/zero of=$FILE && rm $FILE
102400+0 records in         
102400+0 records out
1677721600 bytes (1.7 GB) copied, 14.0708 s, 119 MB/s
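In hindsight, the original 1.9 MB/s lines up with the cache having been off: with the write cache disabled, each synchronous 16 KiB write waits roughly one platter revolution, and 7200 rpm allows only about 120 revolutions per second (my own back-of-the-envelope check, not from the post):

```shell
# With the write cache off, each O_DIRECT write costs ~one revolution:
awk 'BEGIN {
  rpm = 7200
  writes_per_sec = rpm / 60                        # 120 revolutions/s
  printf "%.1f MB/s\n", writes_per_sec * 16 / 1024 # 16 KiB per write
}'
# prints: 1.9 MB/s
```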

Update 4

root@ecm-oscar-519:/home/bcmdit# cryptsetup benchmark

# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1      1394382 iterations per second
PBKDF2-sha256     923042 iterations per second
PBKDF2-sha512     728177 iterations per second
PBKDF2-ripemd160  804122 iterations per second
PBKDF2-whirlpool  313569 iterations per second
#  Algorithm | Key |  Encryption |  Decryption
     aes-cbc   128b  1149.9 MiB/s  3655.8 MiB/s
 serpent-cbc   128b    99.6 MiB/s   743.4 MiB/s
 twofish-cbc   128b   219.0 MiB/s   400.0 MiB/s
     aes-cbc   256b   867.5 MiB/s  2904.5 MiB/s
 serpent-cbc   256b    99.6 MiB/s   742.6 MiB/s
 twofish-cbc   256b   218.9 MiB/s   399.8 MiB/s
     aes-xts   256b  3615.1 MiB/s  3617.3 MiB/s
 serpent-xts   256b   710.8 MiB/s   705.0 MiB/s
 twofish-xts   256b   388.1 MiB/s   394.5 MiB/s
     aes-xts   512b  2884.9 MiB/s  2888.1 MiB/s
 serpent-xts   512b   710.7 MiB/s   704.7 MiB/s
 twofish-xts   512b   388.0 MiB/s   394.3 MiB/s
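For what it's worth, the aes-xts figures above (about 3.6 GiB/s) are more than an order of magnitude faster than the disks, so the LUKS layer itself is unlikely to be the bottleneck. Those numbers depend on hardware AES; presence of the CPU flag can be checked like this (my addition, assuming an x86 system with /proc/cpuinfo):

```shell
# Print "aes" once if the CPU advertises AES-NI, otherwise a fallback message.
grep -m1 -o -w aes /proc/cpuinfo || echo "no AES-NI"
```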

Answer 1

With bs=16K and oflag=direct you are asking dd to do lots of small writes, which is exactly what HDDs are bad at and SSDs are good at.

You could get the best of both worlds with lvmcache (depending on your SSD size).

With bs=16M, or without oflag=direct at all, the writes are split/merged/cached in RAM and written out at optimal sizes.

Why is dd slower using direct writes to disk than writing to a file?

For example:

> dd if=/dev/zero of=test.bin bs=16k count=1000 oflag=direct
1000+0 records in
1000+0 records out
16384000 bytes (16 MB, 16 MiB) copied, 3.19453 s, 5.1 MB/s

> dd if=/dev/zero of=test.bin bs=16M count=1 oflag=direct
1+0 records in
1+0 records out
16777216 bytes (17 MB, 16 MiB) copied, 0.291366 s, 57.6 MB/s

> dd if=/dev/zero of=test.bin bs=16k count=1000
1000+0 records in
1000+0 records out
16384000 bytes (16 MB, 16 MiB) copied, 0.0815558 s, 201 MB/s

> uname -r
4.14.41-130
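A middle ground between the direct and fully buffered runs above (my addition, not part of the original answer): conv=fdatasync lets the page cache batch the small writes but still includes the final flush to disk in the elapsed time, which usually gives the most honest throughput figure for an HDD:

```shell
# Buffered writes, but dd calls fdatasync() before reporting the rate,
# so the elapsed time includes getting the data onto the disk.
dd if=/dev/zero of=test.bin bs=16k count=1000 conv=fdatasync
```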

Answer 2

Using the pv (man page) utility like this:

pv --average-rate < /dev/urandom > /mdX-MountPoint/SomeFileName

may prove more efficient for measuring speed than dd (man page).

I changed the input to random data, which is better than static zeros.

The problem with dd (man page) is that you have to tune the block size.

With pv (man page) that is not the case, since it inherently ramps up to maximum speed for optimal throughput.
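One caveat with random input (my note, not the answerer's): /dev/urandom can itself top out at a few hundred MB/s, so it may become the bottleneck rather than the array; it is worth measuring its raw rate first. Also note that the pv command above keeps writing until the target filesystem fills, so bounding the input, e.g. `head -c 1G /dev/urandom | pv --average-rate > /mdX-MountPoint/SomeFileName`, is safer.

```shell
# How fast can this machine generate random bytes? If this rate is close
# to (or below) the disk speed, the pv test measures urandom, not the disk.
dd if=/dev/urandom of=/dev/null bs=1M count=256 2>&1 | tail -n1
```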
