I have configured a 2-node cluster of physical servers (HP ProLiant DL560 Gen8) using pcs (corosync/pacemaker/pcsd). I have also configured fencing with fence_ilo4.
When one node goes down (by "DOWN" I mean powered off), something strange happens: the second node shuts down as well. Fencing effectively kills the surviving node, so both servers end up offline.
How can I fix this behavior?
What I have tried is adding `wait_for_all: 0` and `expected_votes: 1` to the `quorum` section of `/etc/corosync/corosync.conf`, but the surviving node still gets killed.
At some point maintenance will have to be performed on one of the servers, which means shutting it down. When that happens, I don't want the other node to go down with it.
Here is some output:
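For reference, the quorum settings I ended up with look roughly like the following (a sketch of the relevant `corosync.conf` section; the `Flags: 2Node` in the output below indicates `two_node: 1` is in effect, which is the default pcs applies to a two-node cluster):

```
# /etc/corosync/corosync.conf -- quorum section (sketch)
quorum {
    provider: corosync_votequorum
    two_node: 1        # two-node mode: quorum is retained with a single node up
    wait_for_all: 0    # allow one node to start services without seeing its peer
}
```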
[root@kvm_aquila-02 ~]# pcs quorum status
Quorum information
------------------
Date: Fri Jun 28 09:07:18 2019
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 2
Ring ID: 1/284
Quorate: Yes
Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 1
Flags: 2Node Quorate
Membership information
----------------------
Nodeid Votes Qdevice Name
1 1 NR kvm_aquila-01
2 1 NR kvm_aquila-02 (local)
[root@kvm_aquila-02 ~]# pcs config show
Cluster Name: kvm_aquila
Corosync Nodes:
kvm_aquila-01 kvm_aquila-02
Pacemaker Nodes:
kvm_aquila-01 kvm_aquila-02
Resources:
Clone: dlm-clone
Meta Attrs: interleave=true ordered=true
Resource: dlm (class=ocf provider=pacemaker type=controld)
Operations: monitor interval=30s on-fail=fence (dlm-monitor-interval-30s)
start interval=0s timeout=90 (dlm-start-interval-0s)
stop interval=0s timeout=100 (dlm-stop-interval-0s)
Clone: clvmd-clone
Meta Attrs: interleave=true ordered=true
Resource: clvmd (class=ocf provider=heartbeat type=clvm)
Operations: monitor interval=30s on-fail=fence (clvmd-monitor-interval-30s)
start interval=0s timeout=90s (clvmd-start-interval-0s)
stop interval=0s timeout=90s (clvmd-stop-interval-0s)
Group: test_VPS
Resource: test (class=ocf provider=heartbeat type=VirtualDomain)
Attributes: config=/shared/xml/test.xml hypervisor=qemu:///system migration_transport=ssh
Meta Attrs: allow-migrate=true is-managed=true priority=100 target-role=Started
Utilization: cpu=4 hv_memory=4096
Operations: migrate_from interval=0 timeout=120s (test-migrate_from-interval-0)
migrate_to interval=0 timeout=120 (test-migrate_to-interval-0)
monitor interval=10 timeout=30 (test-monitor-interval-10)
start interval=0s timeout=300s (test-start-interval-0s)
stop interval=0s timeout=300s (test-stop-interval-0s)
Stonith Devices:
Resource: kvm_aquila-01 (class=stonith type=fence_ilo4)
Attributes: ipaddr=10.0.4.39 login=fencing passwd=0ToleranciJa pcmk_host_list="kvm_aquila-01 kvm_aquila-02"
Operations: monitor interval=60s (kvm_aquila-01-monitor-interval-60s)
Resource: kvm_aquila-02 (class=stonith type=fence_ilo4)
Attributes: ipaddr=10.0.4.49 login=fencing passwd=0ToleranciJa pcmk_host_list="kvm_aquila-01 kvm_aquila-02"
Operations: monitor interval=60s (kvm_aquila-02-monitor-interval-60s)
Fencing Levels:
Location Constraints:
Ordering Constraints:
start dlm-clone then start clvmd-clone (kind:Mandatory)
Colocation Constraints:
clvmd-clone with dlm-clone (score:INFINITY)
Ticket Constraints:
Alerts:
No alerts defined
Resources Defaults:
No defaults set
Operations Defaults:
No defaults set
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: kvm_aquila
dc-version: 1.1.19-8.el7_6.4-c3c624ea3d
have-watchdog: false
last-lrm-refresh: 1561619537
no-quorum-policy: ignore
stonith-enabled: true
Quorum:
Options:
wait_for_all: 0
[root@kvm_aquila-02 ~]# pcs cluster status
Cluster Status:
Stack: corosync
Current DC: kvm_aquila-02 (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum
Last updated: Fri Jun 28 09:14:11 2019
Last change: Thu Jun 27 16:23:44 2019 by root via cibadmin on kvm_aquila-01
2 nodes configured
7 resources configured
PCSD Status:
kvm_aquila-02: Online
kvm_aquila-01: Online
[root@kvm_aquila-02 ~]# pcs status
Cluster name: kvm_aquila
Stack: corosync
Current DC: kvm_aquila-02 (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum
Last updated: Fri Jun 28 09:14:31 2019
Last change: Thu Jun 27 16:23:44 2019 by root via cibadmin on kvm_aquila-01
2 nodes configured
7 resources configured
Online: [ kvm_aquila-01 kvm_aquila-02 ]
Full list of resources:
kvm_aquila-01 (stonith:fence_ilo4): Started kvm_aquila-01
kvm_aquila-02 (stonith:fence_ilo4): Started kvm_aquila-02
Clone Set: dlm-clone [dlm]
Started: [ kvm_aquila-01 kvm_aquila-02 ]
Clone Set: clvmd-clone [clvmd]
Started: [ kvm_aquila-01 kvm_aquila-02 ]
Resource Group: test_VPS
test (ocf::heartbeat:VirtualDomain): Started kvm_aquila-01
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Answer 1
It looks like you have configured each STONITH device so that it is able to fence both nodes (`pcmk_host_list` lists both hosts). There are also no location constraints ensuring that the fencing agent responsible for fencing a given node does not run on that same node (STONITH suiciding). That is bad practice.
Try configuring the STONITH devices and location constraints as follows:
pcs stonith create kvm_aquila-01 fence_ilo4 pcmk_host_list=kvm_aquila-01 ipaddr=10.0.4.39 login=fencing passwd=0ToleranciJa op monitor interval=60s
pcs stonith create kvm_aquila-02 fence_ilo4 pcmk_host_list=kvm_aquila-02 ipaddr=10.0.4.49 login=fencing passwd=0ToleranciJa op monitor interval=60s
pcs constraint location kvm_aquila-01 avoids kvm_aquila-01=INFINITY
pcs constraint location kvm_aquila-02 avoids kvm_aquila-02=INFINITY
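After recreating the devices, you can check that each fence agent is banned from the node it is responsible for, and, during a maintenance window, trigger a manual fence to confirm only the target node is powered off. These are standard `pcs` subcommands, shown here as a sketch:

```
# List location constraints; each kvm_aquila-0X device should avoid node kvm_aquila-0X
pcs constraint location

# From kvm_aquila-02, manually fence the other node to verify the setup
# (kvm_aquila-01 should power off; kvm_aquila-02 should stay quorate and online)
pcs stonith fence kvm_aquila-01
```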