Cannot find OSDs in Ceph

I installed a Ceph cluster using cephadm bootstrap.

I can see the disks in the inventory, but they do not appear in the device list. Why is that, and how do I add the disks to the cluster?

root@RX570:~# ceph-volume inventory

Device Path               Size         rotates available Model name
/dev/sdl                  7.28 TB      True    True      USB3.0
/dev/sdm                  7.28 TB      True    True      USB3.0
/dev/sdn                  7.28 TB      True    True      USB3.0
/dev/sdo                  7.28 TB      True    True      USB3.0
/dev/sdp                  7.28 TB      True    True      USB3.0
/dev/nvme0n1              1.82 TB      False   False     Samsung SSD 980 PRO 2TB
/dev/sda                  3.64 TB      False   False     Samsung SSD 860
/dev/sdb                  16.37 TB     True    False     USB3.0
/dev/sdc                  16.37 TB     True    False     USB3.0
/dev/sdd                  16.37 TB     True    False     USB3.0
/dev/sde                  16.37 TB     True    False     USB3.0
/dev/sdf                  16.37 TB     True    False     USB3.0
/dev/sdg                  16.37 TB     True    False     USB3.0
/dev/sdh                  16.37 TB     True    False     USB3.0
/dev/sdi                  16.37 TB     True    False     USB3.0
/dev/sdj                  16.37 TB     True    False     USB3.0
/dev/sdk                  16.37 TB     True    False     USB3.0

root@RX570:~# ceph orch device ls
root@RX570:~# 

root@RX570:~# ceph orch host ls
HOST   ADDR           LABELS  STATUS  
RX570  192.168.1.227  _admin          
1 hosts in cluster

root@RX570:~# docker ps
CONTAINER ID   IMAGE                                     COMMAND                  CREATED              STATUS              PORTS     NAMES
8bee4afbafce   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mon -…"   9 seconds ago        Up 9 seconds                  ceph-2243dcbe-9494-11ed-953a-e14796764522-mon-RX570
e4c133a3b1e8   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   About a minute ago   Up About a minute             ceph-b1dee40a-94a7-11ed-a3c1-29bb7e5ec517-crash-RX570
f81e05a1b7d4   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   About a minute ago   Up About a minute             ceph-86827f26-94aa-11ed-a3c1-29bb7e5ec517-crash-RX570
a3bb6d078fd5   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   About a minute ago   Up About a minute             ceph-ddbfff1c-94ef-11ed-a3c1-29bb7e5ec517-crash-RX570
9615b2f3fd22   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   About a minute ago   Up About a minute             ceph-2243dcbe-9494-11ed-953a-e14796764522-crash-RX570
0c717d30704e   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   About a minute ago   Up About a minute             ceph-3d0a8c9c-94a2-11ed-a3c1-29bb7e5ec517-crash-RX570
494f07c609d8   quay.io/ceph/ceph-grafana:8.3.5           "/bin/sh -c 'grafana…"   25 minutes ago       Up 25 minutes                 ceph-9b740ba0-94f2-11ed-a3c1-29bb7e5ec517-grafana-RX570
9ad68d8eecca   quay.io/prometheus/alertmanager:v0.23.0   "/bin/alertmanager -…"   25 minutes ago       Up 25 minutes                 ceph-9b740ba0-94f2-11ed-a3c1-29bb7e5ec517-alertmanager-RX570
f39f9290b628   quay.io/prometheus/prometheus:v2.33.4     "/bin/prometheus --c…"   26 minutes ago       Up 26 minutes                 ceph-9b740ba0-94f2-11ed-a3c1-29bb7e5ec517-prometheus-RX570
b0b1713c4200   quay.io/ceph/ceph                         "/usr/bin/ceph-mgr -…"   26 minutes ago       Up 26 minutes                 ceph-9b740ba0-94f2-11ed-a3c1-29bb7e5ec517-mgr-RX570-ztegxs
43f2e378e521   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   26 minutes ago       Up 26 minutes                 ceph-9b740ba0-94f2-11ed-a3c1-29bb7e5ec517-crash-RX570
b88ecf269889   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mgr -…"   28 minutes ago       Up 28 minutes                 ceph-9b740ba0-94f2-11ed-a3c1-29bb7e5ec517-mgr-RX570-whcycj
25c7ac170460   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mon -…"   28 minutes ago       Up 28 minutes                 ceph-9b740ba0-94f2-11ed-a3c1-29bb7e5ec517-mon-RX570
84adac6e89d8   quay.io/prometheus/node-exporter:v1.3.1   "/bin/node_exporter …"   31 minutes ago       Up 31 minutes                 ceph-12bf4064-94f1-11ed-a3c1-29bb7e5ec517-node-exporter-RX570
b7601e5b4611   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   31 minutes ago       Up 31 minutes                 ceph-12bf4064-94f1-11ed-a3c1-29bb7e5ec517-crash-RX570

root@RX570:~# ceph status
  cluster:
    id:     9b740ba0-94f2-11ed-a3c1-29bb7e5ec517
    health: HEALTH_WARN
            Failed to place 1 daemon(s)
            failed to probe daemons or devices
            OSD count 0 < osd_pool_default_size 2
 
  services:
    mon: 1 daemons, quorum RX570 (age 28m)
    mgr: RX570.whcycj(active, since 26m), standbys: RX570.ztegxs
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
 
root@RX570:~# ceph health
HEALTH_WARN Failed to place 1 daemon(s); failed to probe daemons or devices; OSD count 0 < osd_pool_default_size 2

Answer 1

Make sure you have added all hosts to /etc/hosts:

# Ceph
<public_network_ip> ceph-1
<public_network_ip> ceph-2
<public_network_ip> ceph-3
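
For the single-node cluster in the question, a concrete entry would look like the following (hostname and IP taken from the ceph orch host ls output above; ceph-1/ceph-2/ceph-3 in the template are just placeholder names):

# Ceph
192.168.1.227 RX570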

Then you need to add the Ceph nodes to the cluster:

ceph cephadm get-pub-key > ~/ceph.pub
ssh-copy-id -f -i ~/ceph.pub root@<host_ip>

ceph orch host add <host_name> <host_ip>
ceph orch host label add <host_name> <role>
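
After adding a host it can take a minute or two for cephadm to probe its disks. As a quick sanity check (nothing more than the standard orchestrator commands), you can confirm the host was registered and force a device rescan:

ceph orch host ls
ceph orch device ls --refresh

Devices should now appear in ceph orch device ls; only devices reported as available can be turned into OSDs.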

Then add OSD daemons on the disks:

ceph orch daemon add osd ceph-1:/dev/sdm
ceph orch daemon add osd ceph-1:/dev/sdn
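
Note that ceph orch daemon add osd only succeeds on devices the orchestrator considers available. In the inventory above most disks are listed as available False, which typically means they still carry a partition table, filesystem or LVM metadata. If the data on such a disk is disposable, wiping it first should make it eligible. A sketch, assuming /dev/sdb on host RX570 may safely be erased:

# WARNING: this destroys all data on the device
ceph orch device zap RX570 /dev/sdb --force
ceph orch device ls --refresh

Once OSDs have been created, ceph osd tree or ceph -s should show them coming up.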

The following also works, but it is not recommended:

ceph orch apply osd --all-available-devices
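
The reason it is discouraged is that this creates an OSD service which keeps consuming every empty, available disk it ever sees on any host in the cluster. If you do use it, the automatic behaviour can be switched off afterwards with the unmanaged flag (taken from the cephadm documentation):

ceph orch apply osd --all-available-devices --unmanaged=true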
