Deploying Ceph Pacific on the ARM architecture

devtools/2024/10/18 12:24:39/

Background

The partner lab's Huawei private cloud originally used a single-point NFS server as its storage backend. Two considerations drove the change: the business now needs OSS (object storage), and the k8s cluster and other machines also need a scalable distributed file system.

Deploying Ceph

Initial machine plan

| IP | Spec | Hostname | Role |
| --- | --- | --- | --- |
| 10.17.3.14 | 4c8g, 1 TB data disk | ceph-node01.xx.local | mon1 mgr1 node01 |
| 10.17.3.15 | 4c8g, 1 TB data disk | ceph-node02.xx.local | mon2 mgr2 node02 |
| 10.17.3.16 | 4c8g, 1 TB data disk | ceph-node03.xx.local | mon3 mgr3 node03 |

Run on all nodes:

Disks that will be used as Ceph OSDs must be unmounted on each node first.
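The unmount step can be sketched as below; `/dev/sdb` is a hypothetical name for the 1 TB data disk (the actual device may differ), and `wipefs` is destructive, so confirm with `lsblk` first:

```shell
# Assumption: the 1 TB data disk is /dev/sdb -- verify with lsblk before running.
lsblk -f                                    # identify the data disk and any mounts
sudo umount /dev/sdb1 2>/dev/null || true   # detach an existing mount, if any
sudo sed -i '\|/dev/sdb|d' /etc/fstab       # stop it from remounting on boot
sudo wipefs --all /dev/sdb                  # destructive: clears old FS signatures
```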

Node time synchronization

apt install apt-transport-https ca-certificates curl software-properties-common -y
vim /etc/chrony/chrony.conf
server ntp.xx.xx.cn minpoll 4 maxpoll 10 iburst # internal NTP server
systemctl restart chronyd

root@ceph-node01:/etc/ceph-cluster# chronyc sources -v
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample              
===============================================================================
^* 100.xx.0.35                  4   8   377   177  +3929ns[+1073ns] +/-  273ms

root@ceph-node01:/etc/ceph-cluster# tail -n 3 /etc/hosts
10.17.3.14 ceph-node01.xx.local ceph-node01
10.17.3.15 ceph-node02.xx.local ceph-node02
10.17.3.16 ceph-node03.xx.local ceph-node03

Deploying with ceph-deploy

curl -x socks5://10.17.3.154:7891 -LO https://download.ceph.com/keys/release.asc
apt-key add release.asc
echo "deb https://download.ceph.com/debian-pacific/ bionic main" | tee /etc/apt/sources.list.d/ceph.list
# Create an unprivileged deploy account
groupadd -r -g 2088 cephadmin && useradd -r -m -s /bin/bash -u 2088 -g 2088 cephadmin && echo "cephadmin:xx" | chpasswd
echo "cephadmin ALL=(ALL:ALL) NOPASSWD: ALL" >> /etc/sudoers
su cephadmin
apt install ceph-common -y
mkdir -pv /etc/ceph-cluster
ceph-deploy install --release pacific ceph-node01
ceph-deploy install --release pacific ceph-node02
ceph quorum_status --format json-pretty
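ceph-deploy drives the other nodes over SSH; without key-based authentication it falls back to password prompts, which is what happens in the logs further down. A minimal sketch of distributing a key first (run as the user that actually invokes ceph-deploy; the commands in this guide use sudo, so that is root here):

```shell
# Generate a key once on the deploy node (skip if one already exists),
# then push it to every node so ceph-deploy can connect non-interactively.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for n in ceph-node01 ceph-node02 ceph-node03; do
  ssh-copy-id "root@$n"
done
```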

Initialize the cluster from the deploy node

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy new --cluster-network 10.17.3.0/24 --public-network 10.17.3.0/24 ceph-node1.xx.local
sudo: unable to resolve host ceph-node01: Resource temporarily unavailable
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy new --cluster-network 10.17.3.0/24 --public-network 10.17.3.0/24 ceph-node1.xx.local
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffffb6791c20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-node1.xx.local']
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0xffffb6772410>
[ceph_deploy.cli][INFO  ]  public_network                : 10.17.3.0/24
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : 10.17.3.0/24
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-node1.xx.local][DEBUG ] connected to host: ceph-node01
[ceph-node1.xx.local][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-node1.xx.local
[ceph_deploy.new][WARNIN] could not connect via SSH
[ceph_deploy.new][INFO  ] will connect again with password prompt
root@ceph-node1.xx.local's password:
Permission denied, please try again.
root@ceph-node1.xx.local's password:
[ceph-node1.xx.local][DEBUG ] connected to host: ceph-node1.xx.local
[ceph-node1.xx.local][DEBUG ] detect platform information from remote host
[ceph-node1.xx.local][DEBUG ] detect machine type
[ceph_deploy.new][INFO  ] adding public keys to authorized_keys
[ceph-node1.xx.local][DEBUG ] append contents to file
root@ceph-node1.xx.local's password:
root@ceph-node1.xx.local's password:
[ceph-node1.xx.local][DEBUG ] connected to host: ceph-node1.xx.local
[ceph-node1.xx.local][DEBUG ] detect platform information from remote host
[ceph-node1.xx.local][DEBUG ] detect machine type
[ceph-node1.xx.local][DEBUG ] find the location of an executable
[ceph-node1.xx.local][INFO  ] Running command: /bin/ip link show
[ceph-node1.xx.local][INFO  ] Running command: /bin/ip addr show
[ceph-node1.xx.local][DEBUG ] IP addresses found: [u'10.108.101.32', u'10.104.61.120', u'10.98.52.88', u'10.244.24.0', u'10.244.24.1', u'10.99.115.16', u'10.106.43.191', u'10.104.75.139', u'10.105.7.41', u'10.100.142.181', u'10.97.252.180', u'10.110.23.237', u'10.98.213.254', u'10.96.0.1', u'10.101.27.103', u'10.99.3.237', u'10.97.241.24', u'10.17.3.14', u'10.110.31.40', u'10.109.24.221', u'10.97.44.182', u'10.99.46.158', u'10.100.68.217', u'10.96.87.174', u'10.97.255.233', u'10.111.118.0', u'10.96.0.10', u'10.96.23.220', u'10.105.34.53', u'10.106.170.182', u'10.106.145.33']
[ceph_deploy.new][DEBUG ] Resolving host ceph-node1.xx.local
[ceph_deploy.new][DEBUG ] Monitor ceph-node1 at 10.17.3.14
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are [u'10.17.3.14']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
cephadmin@ceph-node01:/etc/ceph-cluster$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
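The generated ceph.conf is minimal. For this run it should look roughly like the following sketch; the fsid and mon address are taken from the mon_status output later in this log, and the cephx auth lines are the ceph-deploy defaults assumed here:

```ini
[global]
fsid = 5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5
public_network = 10.17.3.0/24
cluster_network = 10.17.3.0/24
mon_initial_members = ceph-node1
mon_host = 10.17.3.14
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```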

Initialize the ceph nodes

sudo ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node1.xx.local ceph-node2.xx.local ceph-node3.xx.local
sudo: unable to resolve host ceph-node01: Resource temporarily unavailable
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node1.xx.local ceph-node2.xx.local ceph-node3.xx.local
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff9f33dc80>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : False
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0xffff9f3fac50>
[ceph_deploy.cli][INFO  ]  install_mgr                   : False
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-node1.xx.local', 'ceph-node2.xx.local', 'ceph-node3.xx.local']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : True
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : None
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version mimic on cluster ceph hosts ceph-node1.xx.local ceph-node2.xx.local ceph-node3.xx.local
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-node1.xx.local ...
root@ceph-node1.xx.local's password:
root@ceph-node1.xx.local's password:
[ceph-node1.xx.local][DEBUG ] connected to host: ceph-node1.xx.local
[ceph-node1.xx.local][DEBUG ] detect platform information from remote host
[ceph-node1.xx.local][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node1.xx.local][INFO  ] installing Ceph on ceph-node1.xx.local
[ceph-node1.xx.local][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node1.xx.local][DEBUG ] Hit:1 https://mirrors.ustc.edu.cn/kubernetes/core:/stable:/v1.30/deb  InRelease
[ceph-node1.xx.local][DEBUG ] Hit:2 http://ports.ubuntu.com/ubuntu-ports bionic InRelease
[ceph-node1.xx.local][DEBUG ] Hit:3 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease
[ceph-node1.xx.local][DEBUG ] Hit:4 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease
[ceph-node1.xx.local][DEBUG ] Hit:5 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease
[ceph-node1.xx.local][DEBUG ] Hit:6 https://download.ceph.com/debian-pacific bionic InRelease
[ceph-node1.xx.local][DEBUG ] Reading package lists...
[ceph-node1.xx.local][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[ceph-node1.xx.local][DEBUG ] Reading package lists...
[ceph-node1.xx.local][DEBUG ] Building dependency tree...
[ceph-node1.xx.local][DEBUG ] Reading state information...
[ceph-node1.xx.local][DEBUG ] ca-certificates is already the newest version (20230311ubuntu0.18.04.1).
[ceph-node1.xx.local][DEBUG ] apt-transport-https is already the newest version (1.6.17).
[ceph-node1.xx.local][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 340 not upgraded.
[ceph-node1.xx.local][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node1.xx.local][DEBUG ] Hit:1 https://mirrors.ustc.edu.cn/kubernetes/core:/stable:/v1.30/deb  InRelease
[ceph-node1.xx.local][DEBUG ] Hit:2 http://ports.ubuntu.com/ubuntu-ports bionic InRelease
[ceph-node1.xx.local][DEBUG ] Hit:3 https://download.ceph.com/debian-pacific bionic InRelease
[ceph-node1.xx.local][DEBUG ] Hit:4 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease
[ceph-node1.xx.local][DEBUG ] Hit:5 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease
[ceph-node1.xx.local][DEBUG ] Hit:6 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease
[ceph-node1.xx.local][DEBUG ] Reading package lists...
[ceph-node1.xx.local][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
[ceph-node1.xx.local][DEBUG ] Reading package lists...
[ceph-node1.xx.local][DEBUG ] Building dependency tree...
[ceph-node1.xx.local][DEBUG ] Reading state information...
[ceph-node1.xx.local][DEBUG ] The following packages were automatically installed and are no longer required:
[ceph-node1.xx.local][DEBUG ]   formencode-i18n libpython2.7 python-asn1crypto python-bcrypt python-bs4
[ceph-node1.xx.local][DEBUG ]   python-ceph-argparse python-certifi python-cffi-backend python-chardet
[ceph-node1.xx.local][DEBUG ]   python-cherrypy3 python-cryptography python-dnspython python-enum34
[ceph-node1.xx.local][DEBUG ]   python-formencode python-idna python-ipaddress python-jinja2 python-logutils
[ceph-node1.xx.local][DEBUG ]   python-mako python-markupsafe python-openssl python-paste python-pastedeploy
[ceph-node1.xx.local][DEBUG ]   python-pecan python-pkg-resources python-prettytable python-rbd
[ceph-node1.xx.local][DEBUG ]   python-requests python-simplegeneric python-simplejson python-singledispatch
[ceph-node1.xx.local][DEBUG ]   python-six python-tempita python-urllib3 python-waitress python-webob
[ceph-node1.xx.local][DEBUG ]   python-webtest python-werkzeug
[ceph-node1.xx.local][DEBUG ] Use 'apt autoremove' to remove them.
[ceph-node1.xx.local][DEBUG ] The following additional packages will be installed:
[ceph-node1.xx.local][DEBUG ]   ceph-base ceph-common ceph-mgr ceph-mgr-modules-core libcephfs2 libjaeger
[ceph-node1.xx.local][DEBUG ]   liblua5.3-0 librabbitmq4 librados2 libradosstriper1 librbd1 librdkafka1
[ceph-node1.xx.local][DEBUG ]   librdmacm1 librgw2 libsqlite3-mod-ceph python3-bcrypt python3-bs4
[ceph-node1.xx.local][DEBUG ]   python3-ceph-argparse python3-ceph-common python3-cephfs python3-cherrypy3
[ceph-node1.xx.local][DEBUG ]   python3-dateutil python3-distutils python3-jwt python3-lib2to3
[ceph-node1.xx.local][DEBUG ]   python3-logutils python3-mako python3-markupsafe python3-paste
[ceph-node1.xx.local][DEBUG ]   python3-pastedeploy python3-pecan python3-prettytable python3-rados
[ceph-node1.xx.local][DEBUG ]   python3-rbd python3-rgw python3-simplegeneric python3-singledispatch
[ceph-node1.xx.local][DEBUG ]   python3-tempita python3-waitress python3-webob python3-webtest
[ceph-node1.xx.local][DEBUG ]   python3-werkzeug
[ceph-node1.xx.local][DEBUG ] Suggested packages:
[ceph-node1.xx.local][DEBUG ]   python3-influxdb python3-crypto python3-beaker python-mako-doc httpd-wsgi
[ceph-node1.xx.local][DEBUG ]   libapache2-mod-python libapache2-mod-scgi libjs-mochikit python-pecan-doc
[ceph-node1.xx.local][DEBUG ]   python-waitress-doc python-webob-doc python-webtest-doc ipython3
[ceph-node1.xx.local][DEBUG ]   python3-lxml python3-termcolor python3-watchdog python-werkzeug-doc
[ceph-node1.xx.local][DEBUG ] Recommended packages:
[ceph-node1.xx.local][DEBUG ]   nvme-cli smartmontools ceph-fuse ceph-mgr-dashboard
[ceph-node1.xx.local][DEBUG ]   ceph-mgr-diskprediction-local ceph-mgr-k8sevents ceph-mgr-cephadm
[ceph-node1.xx.local][DEBUG ]   python3-lxml python3-routes python3-simplejson python3-pastescript
[ceph-node1.xx.local][DEBUG ]   python3-pyinotify
[ceph-node1.xx.local][DEBUG ] The following packages will be REMOVED:
[ceph-node1.xx.local][DEBUG ]   python-cephfs python-rados python-rgw
[ceph-node1.xx.local][DEBUG ] The following NEW packages will be installed:
[ceph-node1.xx.local][DEBUG ]   ceph-mgr-modules-core libjaeger liblua5.3-0 librabbitmq4 librdkafka1
[ceph-node1.xx.local][DEBUG ]   librdmacm1 libsqlite3-mod-ceph python3-bcrypt python3-bs4
[ceph-node1.xx.local][DEBUG ]   python3-ceph-argparse python3-ceph-common python3-cephfs python3-cherrypy3
[ceph-node1.xx.local][DEBUG ]   python3-dateutil python3-distutils python3-jwt python3-lib2to3
[ceph-node1.xx.local][DEBUG ]   python3-logutils python3-mako python3-markupsafe python3-paste
[ceph-node1.xx.local][DEBUG ]   python3-pastedeploy python3-pecan python3-prettytable python3-rados
[ceph-node1.xx.local][DEBUG ]   python3-rbd python3-rgw python3-simplegeneric python3-singledispatch
[ceph-node1.xx.local][DEBUG ]   python3-tempita python3-waitress python3-webob python3-webtest
[ceph-node1.xx.local][DEBUG ]   python3-werkzeug
[ceph-node1.xx.local][DEBUG ] The following packages will be upgraded:
[ceph-node1.xx.local][DEBUG ]   ceph ceph-base ceph-common ceph-mds ceph-mgr ceph-mon ceph-osd libcephfs2
[ceph-node1.xx.local][DEBUG ]   librados2 libradosstriper1 librbd1 librgw2 radosgw
[ceph-node1.xx.local][DEBUG ] 13 upgraded, 34 newly installed, 3 to remove and 327 not upgraded.
[ceph-node1.xx.local][DEBUG ] Need to get 70.2 MB of archives.
[ceph-node1.xx.local][DEBUG ] After this operation, 117 MB of additional disk space will be used.
[ceph-node1.xx.local][DEBUG ] Get:1 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 librdmacm1 arm64 17.1-1ubuntu0.2 [49.1 kB]
[ceph-node1.xx.local][DEBUG ] Get:2 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 liblua5.3-0 arm64 5.3.3-1ubuntu0.18.04.1 [105 kB]
[ceph-node1.xx.local][DEBUG ] Get:3 http://ports.ubuntu.com/ubuntu-ports bionic-updates/universe arm64 librabbitmq4 arm64 0.8.0-1ubuntu0.18.04.2 [30.3 kB]
[ceph-node1.xx.local][DEBUG ] Get:4 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 librdkafka1 arm64 0.11.3-1build1 [245 kB]
[ceph-node1.xx.local][DEBUG ] Get:5 https://download.ceph.com/debian-pacific bionic/main arm64 libradosstriper1 arm64 16.2.15-1bionic [387 kB]
[ceph-node1.xx.local][DEBUG ] Get:6 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-dateutil all 2.6.1-1 [52.3 kB]
[ceph-node1.xx.local][DEBUG ] Get:7 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-bcrypt arm64 3.1.4-2 [25.3 kB]
[ceph-node1.xx.local][DEBUG ] Get:8 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-cherrypy3 all 8.9.1-2 [160 kB]
[ceph-node1.xx.local][DEBUG ] Get:9 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 python3-lib2to3 all 3.6.9-1~18.04 [77.4 kB]
[ceph-node1.xx.local][DEBUG ] Get:10 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 python3-distutils all 3.6.9-1~18.04 [144 kB]
[ceph-node1.xx.local][DEBUG ] Get:11 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 python3-jwt all 1.5.3+ds1-1ubuntu0.1 [16.6 kB]
[ceph-node1.xx.local][DEBUG ] Get:12 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-logutils all 0.3.3-5 [16.7 kB]
[ceph-node1.xx.local][DEBUG ] Get:13 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-markupsafe arm64 1.0-1build1 [13.2 kB]
[ceph-node1.xx.local][DEBUG ] Get:14 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 python3-mako all 1.0.7+ds1-1ubuntu0.2 [59.4 kB]
[ceph-node1.xx.local][DEBUG ] Get:15 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-simplegeneric all 0.8.1-1 [11.5 kB]
[ceph-node1.xx.local][DEBUG ] Get:16 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-singledispatch all 3.4.0.3-2 [7,022 B]
[ceph-node1.xx.local][DEBUG ] Get:17 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-webob all 1:1.7.3-2fakesync1 [64.3 kB]
[ceph-node1.xx.local][DEBUG ] Get:18 https://download.ceph.com/debian-pacific bionic/main arm64 radosgw arm64 16.2.15-1bionic [9,564 kB]
[ceph-node1.xx.local][DEBUG ] Get:19 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-bs4 all 4.6.0-1 [67.8 kB]
[ceph-node1.xx.local][DEBUG ] Get:20 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-waitress all 1.0.1-1 [53.4 kB]
[ceph-node1.xx.local][DEBUG ] Get:21 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-tempita all 0.5.2-2 [13.9 kB]
[ceph-node1.xx.local][DEBUG ] Get:22 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-paste all 2.0.3+dfsg-4ubuntu1 [456 kB]
[ceph-node1.xx.local][DEBUG ] Get:23 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-pastedeploy all 1.5.2-4 [13.4 kB]
[ceph-node1.xx.local][DEBUG ] Get:24 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-webtest all 2.0.28-1ubuntu1 [27.9 kB]
[ceph-node1.xx.local][DEBUG ] Get:25 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-pecan all 1.2.1-2 [86.1 kB]
[ceph-node1.xx.local][DEBUG ] Get:26 http://ports.ubuntu.com/ubuntu-ports bionic-updates/universe arm64 python3-werkzeug all 0.14.1+dfsg1-1ubuntu0.2 [175 kB]
[ceph-node1.xx.local][DEBUG ] Get:27 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-prettytable all 0.7.2-3 [19.7 kB].....
[ceph-node3.xx.local][DEBUG ] Setting up ceph (16.2.15-1bionic) ...
[ceph-node3.xx.local][DEBUG ] Processing triggers for systemd (237-3ubuntu10.31) ...
[ceph-node3.xx.local][DEBUG ] Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
[ceph-node3.xx.local][DEBUG ] Processing triggers for ureadahead (0.100.0-21) ...
[ceph-node3.xx.local][DEBUG ] ureadahead will be reprofiled on next reboot
[ceph-node3.xx.local][DEBUG ] Processing triggers for libc-bin (2.27-3ubuntu1) ...
[ceph-node3.xx.local][INFO  ] Running command: ceph --version
[ceph-node3.xx.local][DEBUG ] ceph version 16.2.15 (618f440892089921c3e944a991122ddc44e60516) pacific (stable)
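With the packages installed, it is worth confirming that all three nodes ended up on the same release before continuing. A quick loop, using the hostnames from the /etc/hosts entries above:

```shell
# Print the installed Ceph release on every node; all three should show
# the same pacific version string.
for n in ceph-node01 ceph-node02 ceph-node03; do
  echo "== $n =="
  ssh "$n" ceph --version
done
```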

Add the ceph-mon service to the cluster (mon initialization)

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy  mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff82b5ceb0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0xffff82bc9cd0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-node01
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-node01 ...
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 18.04 bionic
[ceph-node01][DEBUG ] determining if provided host has same hostname in remote
[ceph-node01][DEBUG ] get remote short hostname
[ceph-node01][DEBUG ] deploying mon to ceph-node01
[ceph-node01][DEBUG ] get remote short hostname
[ceph-node01][DEBUG ] remote hostname: ceph-node01
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node01][DEBUG ] create the mon path if it does not exist
[ceph-node01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-node01/done
[ceph-node01][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-node01][DEBUG ] create the init path if it does not exist
[ceph-node01][INFO  ] Running command: systemctl enable ceph.target
[ceph-node01][INFO  ] Running command: systemctl enable ceph-mon@ceph-node01
[ceph-node01][INFO  ] Running command: systemctl start ceph-mon@ceph-node01
[ceph-node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node01.asok mon_status
[ceph-node01][DEBUG ] ********************************************************************************
[ceph-node01][DEBUG ] status for monitor: mon.ceph-node01
[ceph-node01][DEBUG ] {
[ceph-node01][DEBUG ]   "election_epoch": 3,
[ceph-node01][DEBUG ]   "extra_probe_peers": [],
[ceph-node01][DEBUG ]   "feature_map": {
[ceph-node01][DEBUG ]     "mon": [
[ceph-node01][DEBUG ]       {
[ceph-node01][DEBUG ]         "features": "0x3f01cfbdfffdffff",
[ceph-node01][DEBUG ]         "num": 1,
[ceph-node01][DEBUG ]         "release": "luminous"
[ceph-node01][DEBUG ]       }
[ceph-node01][DEBUG ]     ]
[ceph-node01][DEBUG ]   },
[ceph-node01][DEBUG ]   "features": {
[ceph-node01][DEBUG ]     "quorum_con": "4540138314316775423",
[ceph-node01][DEBUG ]     "quorum_mon": [
[ceph-node01][DEBUG ]       "kraken",
[ceph-node01][DEBUG ]       "luminous",
[ceph-node01][DEBUG ]       "mimic",
[ceph-node01][DEBUG ]       "osdmap-prune",
[ceph-node01][DEBUG ]       "nautilus",
[ceph-node01][DEBUG ]       "octopus",
[ceph-node01][DEBUG ]       "pacific",
[ceph-node01][DEBUG ]       "elector-pinging"
[ceph-node01][DEBUG ]     ],
[ceph-node01][DEBUG ]     "required_con": "2449958747317026820",
[ceph-node01][DEBUG ]     "required_mon": [
[ceph-node01][DEBUG ]       "kraken",
[ceph-node01][DEBUG ]       "luminous",
[ceph-node01][DEBUG ]       "mimic",
[ceph-node01][DEBUG ]       "osdmap-prune",
[ceph-node01][DEBUG ]       "nautilus",
[ceph-node01][DEBUG ]       "octopus",
[ceph-node01][DEBUG ]       "pacific",
[ceph-node01][DEBUG ]       "elector-pinging"
[ceph-node01][DEBUG ]     ]
[ceph-node01][DEBUG ]   },
[ceph-node01][DEBUG ]   "monmap": {
[ceph-node01][DEBUG ]     "created": "2024-10-08T10:12:42.715558Z",
[ceph-node01][DEBUG ]     "disallowed_leaders: ": "",
[ceph-node01][DEBUG ]     "election_strategy": 1,
[ceph-node01][DEBUG ]     "epoch": 1,
[ceph-node01][DEBUG ]     "features": {
[ceph-node01][DEBUG ]       "optional": [],
[ceph-node01][DEBUG ]       "persistent": [
[ceph-node01][DEBUG ]         "kraken",
[ceph-node01][DEBUG ]         "luminous",
[ceph-node01][DEBUG ]         "mimic",
[ceph-node01][DEBUG ]         "osdmap-prune",
[ceph-node01][DEBUG ]         "nautilus",
[ceph-node01][DEBUG ]         "octopus",
[ceph-node01][DEBUG ]         "pacific",
[ceph-node01][DEBUG ]         "elector-pinging"
[ceph-node01][DEBUG ]       ]
[ceph-node01][DEBUG ]     },
[ceph-node01][DEBUG ]     "fsid": "5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5",
[ceph-node01][DEBUG ]     "min_mon_release": 16,
[ceph-node01][DEBUG ]     "min_mon_release_name": "pacific",
[ceph-node01][DEBUG ]     "modified": "2024-10-08T10:12:42.715558Z",
[ceph-node01][DEBUG ]     "mons": [
[ceph-node01][DEBUG ]       {
[ceph-node01][DEBUG ]         "addr": "10.17.3.14:6789/0",
[ceph-node01][DEBUG ]         "crush_location": "{}",
[ceph-node01][DEBUG ]         "name": "ceph-node01",
[ceph-node01][DEBUG ]         "priority": 0,
[ceph-node01][DEBUG ]         "public_addr": "10.17.3.14:6789/0",
[ceph-node01][DEBUG ]         "public_addrs": {
[ceph-node01][DEBUG ]           "addrvec": [
[ceph-node01][DEBUG ]             {
[ceph-node01][DEBUG ]               "addr": "10.17.3.14:3300",
[ceph-node01][DEBUG ]               "nonce": 0,
[ceph-node01][DEBUG ]               "type": "v2"
[ceph-node01][DEBUG ]             },
[ceph-node01][DEBUG ]             {
[ceph-node01][DEBUG ]               "addr": "10.17.3.14:6789",
[ceph-node01][DEBUG ]               "nonce": 0,
[ceph-node01][DEBUG ]               "type": "v1"
[ceph-node01][DEBUG ]             }
[ceph-node01][DEBUG ]           ]
[ceph-node01][DEBUG ]         },
[ceph-node01][DEBUG ]         "rank": 0,
[ceph-node01][DEBUG ]         "weight": 0
[ceph-node01][DEBUG ]       }
[ceph-node01][DEBUG ]     ],
[ceph-node01][DEBUG ]     "removed_ranks: ": "",
[ceph-node01][DEBUG ]     "stretch_mode": false,
[ceph-node01][DEBUG ]     "tiebreaker_mon": ""
[ceph-node01][DEBUG ]   },
[ceph-node01][DEBUG ]   "name": "ceph-node01",
[ceph-node01][DEBUG ]   "outside_quorum": [],
[ceph-node01][DEBUG ]   "quorum": [
[ceph-node01][DEBUG ]     0
[ceph-node01][DEBUG ]   ],
[ceph-node01][DEBUG ]   "quorum_age": 77,
[ceph-node01][DEBUG ]   "rank": 0,
[ceph-node01][DEBUG ]   "state": "leader",
[ceph-node01][DEBUG ]   "stretch_mode": false,
[ceph-node01][DEBUG ]   "sync_provider": []
[ceph-node01][DEBUG ] }
[ceph-node01][DEBUG ] ********************************************************************************
[ceph-node01][INFO  ] monitor: mon.ceph-node01 is running
[ceph-node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node01.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] find the location of an executable
[ceph-node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node01.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph-node01 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpWWGCyS
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] get remote short hostname
[ceph-node01][DEBUG ] fetch remote file
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-node01.asok mon_status
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.admin
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.bootstrap-mds
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.bootstrap-mgr
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.bootstrap-osd
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpWWGCyS

Verify the generated files

cephadmin@ceph-node01:/etc/ceph-cluster$ ls /etc/ceph/
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring  rbdmap  tmpAi40Po  tmpSILILE  tmpwq6jcL
cephadmin@ceph-node01:/etc/ceph-cluster$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-mgr.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring  ceph.client.admin.keyring  ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
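At this point the cephadmin user still cannot read /etc/ceph/ceph.client.admin.keyring (it is root-owned by default), so plain ceph commands require sudo. One way to fix that, a sketch using POSIX ACLs and assuming the acl package is installable from the configured repos:

```shell
sudo apt install -y acl
# Grant the deploy user read access to the admin keyring only.
sudo setfacl -m u:cephadmin:r /etc/ceph/ceph.client.admin.keyring
ceph -s    # should now work as cephadmin without sudo
```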

Distribute the ceph admin keyring to all nodes

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy admin ceph-node01 ceph-node02 ceph-node03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy admin ceph-node01 ceph-node02 ceph-node03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff99fbb0f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-node01', 'ceph-node02', 'ceph-node03']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0xffff9a0d5c50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node02
The authenticity of host 'ceph-node02 (10.17.3.15)' can't be established.
ECDSA key fingerprint is SHA256:G3fJV27edH5tu4HNY0ArPdlNDPO9eaIEQKOdd1MAcdo.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph-node02' (ECDSA) to the list of known hosts.
root@ceph-node02's password:
root@ceph-node02's password:
[ceph-node02][DEBUG ] connected to host: ceph-node02

Deploy the first ceph-mgr node; node02 and node03 will be added later

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy mgr create ceph-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy mgr create ceph-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph-node01', 'ceph-node01')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff9f0271e0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0xffff9f11b350>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-node01:ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-node01
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node01][WARNIN] mgr keyring does not exist yet, creating one
[ceph-node01][DEBUG ] create a keyring file
[ceph-node01][DEBUG ] create path recursively if it doesn't exist
[ceph-node01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-node01 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-node01/keyring
[ceph-node01][INFO  ] Running command: systemctl enable ceph-mgr@ceph-node01
[ceph-node01][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-node01.service → /lib/systemd/system/ceph-mgr@.service.
[ceph-node01][INFO  ] Running command: systemctl start ceph-mgr@ceph-node01
[ceph-node01][INFO  ] Running command: systemctl enable ceph.target

Verify on the target node that the mgr service is running

cephadmin@ceph-node01:/etc/ceph-cluster$ ps -ef |grep ceph-
root        4243       1  0 17:36 ?        00:00:00 /usr/bin/python2.7 /usr/bin/ceph-crash
ceph       11656       1  0 18:39 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id ceph-node01 --setuser ceph --setgroup ceph
root       11707    5223  0 18:39 pts/1    00:00:00 tail -f /var/log/ceph/ceph-mon.ceph-node01.log
ceph       12301       1  9 18:45 ?        00:00:05 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-node01 --setuser ceph --setgroup ceph
cephadm+   12529    9641  0 18:46 pts/0    00:00:00 grep --color=auto ceph-

Push the cluster admin keyring and config to node01, node02 and node03 (output shown for node01)

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy admin ceph-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy admin ceph-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff83ba20f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-node01']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0xffff83cbcc50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

Initialize the storage nodes, i.e. the nodes that store the actual data and host most of the OSDs in the cluster

# run on every storage node
ceph-deploy install --release pacific ceph-node02
ceph-deploy install --release pacific ceph-node03
root@ceph-node01:/etc/ceph-cluster# ceph-deploy install --release pacific ceph-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy install --release pacific ceph-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffffa437daf0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : True
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0xffffa4439c50>
[ceph_deploy.cli][INFO  ]  install_mgr                   : False
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-node01']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : False
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : pacific
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version pacific on cluster ceph hosts ceph-node01
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-node01 ...
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node01][INFO  ] installing Ceph on ceph-node01
[ceph-node01][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node01][DEBUG ] Hit:1 https://mirrors.ustc.edu.cn/kubernetes/core:/stable:/v1.30/deb  InRelease
[ceph-node01][DEBUG ] Hit:2 http://ports.ubuntu.com/ubuntu-ports bionic InRelease
[ceph-node01][DEBUG ] Hit:3 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease
[ceph-node01][DEBUG ] Hit:4 https://download.ceph.com/debian-pacific bionic InRelease
[ceph-node01][DEBUG ] Hit:5 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease
[ceph-node01][DEBUG ] Hit:6 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease
[ceph-node01][DEBUG ] Reading package lists...
[ceph-node01][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[ceph-node01][DEBUG ] Reading package lists...
[ceph-node01][DEBUG ] Building dependency tree...
[ceph-node01][DEBUG ] Reading state information...
[ceph-node01][DEBUG ] ca-certificates is already the newest version (20230311ubuntu0.18.04.1).
[ceph-node01][DEBUG ] apt-transport-https is already the newest version (1.6.17).
[ceph-node01][DEBUG ] The following packages were automatically installed and are no longer required:
[ceph-node01][DEBUG ]   formencode-i18n libpython2.7 python-asn1crypto python-bcrypt python-bs4
[ceph-node01][DEBUG ]   python-ceph-argparse python-certifi python-cffi-backend python-chardet
[ceph-node01][DEBUG ]   python-cherrypy3 python-cryptography python-dnspython python-enum34
[ceph-node01][DEBUG ]   python-formencode python-idna python-ipaddress python-jinja2 python-logutils
[ceph-node01][DEBUG ]   python-mako python-markupsafe python-openssl python-paste python-pastedeploy
[ceph-node01][DEBUG ]   python-pecan python-pkg-resources python-prettytable python-rbd
[ceph-node01][DEBUG ]   python-requests python-simplegeneric python-simplejson python-singledispatch
[ceph-node01][DEBUG ]   python-six python-tempita python-urllib3 python-waitress python-webob
[ceph-node01][DEBUG ]   python-webtest python-werkzeug
[ceph-node01][DEBUG ] Use 'apt autoremove' to remove them.
[ceph-node01][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 327 not upgraded.
[ceph-node01][INFO  ] Running command: wget -O release.asc https://download.ceph.com/keys/release.asc
[ceph-node01][WARNIN] --2024-10-08 19:02:09--  https://download.ceph.com/keys/release.asc
[ceph-node01][WARNIN] Resolving download.ceph.com (download.ceph.com)... 158.69.68.124, 2607:5300:201:2000::3:58a1
[ceph-node01][WARNIN] Connecting to download.ceph.com (download.ceph.com)|158.69.68.124|:443... connected.
[ceph-node01][WARNIN] HTTP request sent, awaiting response... 200 OK
[ceph-node01][WARNIN] Length: 1645 (1.6K) [application/octet-stream]
[ceph-node01][WARNIN] Saving to: ‘release.asc’
[ceph-node01][WARNIN]
[ceph-node01][WARNIN]      0K .                                                     100%  439M=0s
[ceph-node01][WARNIN]
[ceph-node01][WARNIN] 2024-10-08 19:02:10 (439 MB/s) - ‘release.asc’ saved [1645/1645]

List the disks on each node before initializing them

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk list ceph-node01.xx.local
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk list ceph-node02.xx.local
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk list ceph-node03.xx.local

Wipe the disks

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk zap ceph-node01 /dev/vdb
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk zap ceph-node02 /dev/vdb
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk zap ceph-node03 /dev/vdb

Output:

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy disk zap ceph-node01 /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff93fe9f50>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ceph-node01
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0xffff940514d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/vdb']
[ceph_deploy.osd][DEBUG ] zapping /dev/vdb on ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node01][DEBUG ] zeroing last few blocks of device
[ceph-node01][DEBUG ] find the location of an executable
[ceph-node01][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/vdb
[ceph-node01][WARNIN] --> Zapping: /dev/vdb
[ceph-node01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node01][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/vdb bs=1M count=10 conv=fsync
[ceph-node01][WARNIN]  stderr: 10+0 records in
[ceph-node01][WARNIN] 10+0 records out
[ceph-node01][WARNIN]  stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0246339 s, 426 MB/s
[ceph-node01][WARNIN] --> Zapping successful for: <Raw Device: /dev/vdb>

Add the OSDs. Here the data, the block-db metadata and the block-wal write-ahead log all share the same device.

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy osd create ceph-node01.xx.local --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy osd create ceph-node01.xx.local --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff811c2aa0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-node01.xx.local
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff81225450>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[ceph-node01.xx.local][DEBUG ] connected to host: ceph-node01.xx.local
[ceph-node01.xx.local][DEBUG ] detect platform information from remote host
[ceph-node01.xx.local][DEBUG ] detect machine type
[ceph-node01.xx.local][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node01.xx.local
[ceph-node01.xx.local][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node01.xx.local][WARNIN] osd keyring does not exist yet, creating one
[ceph-node01.xx.local][DEBUG ] create a keyring file
[ceph-node01.xx.local][DEBUG ] find the location of an executable
[ceph-node01.xx.local][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 66fd9200-a35e-4a36-85a2-a512b09826de
[ceph-node01.xx.local][WARNIN] Running command: vgcreate --force --yes ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9 /dev/vdb
[ceph-node01.xx.local][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-node01.xx.local][WARNIN]  stdout: Volume group "ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9" successfully created
[ceph-node01.xx.local][WARNIN] Running command: lvcreate --yes -l 262143 -n osd-block-66fd9200-a35e-4a36-85a2-a512b09826de ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9
[ceph-node01.xx.local][WARNIN]  stdout: Logical volume "osd-block-66fd9200-a35e-4a36-85a2-a512b09826de" created.
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node01.xx.local][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph-node01.xx.local][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9/osd-block-66fd9200-a35e-4a36-85a2-a512b09826de
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[ceph-node01.xx.local][WARNIN] Running command: /bin/ln -s /dev/ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9/osd-block-66fd9200-a35e-4a36-85a2-a512b09826de /var/lib/ceph/osd/ceph-0/block
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph-node01.xx.local][WARNIN]  stderr: 2024-10-08T19:14:26.763+0800 ffff8e3ea1f0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node01.xx.local][WARNIN] 2024-10-08T19:14:26.763+0800 ffff8e3ea1f0 -1 AuthRegistry(0xffff8805c4d0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node01.xx.local][WARNIN]  stderr: got monmap epoch 1
[ceph-node01.xx.local][WARNIN] --> Creating keyring file for osd.0
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 66fd9200-a35e-4a36-85a2-a512b09826de --setuser ceph --setgroup ceph
[ceph-node01.xx.local][WARNIN]  stderr: 2024-10-08T19:14:27.315+0800 ffffb41ab010 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
[ceph-node01.xx.local][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9/osd-block-66fd9200-a35e-4a36-85a2-a512b09826de --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[ceph-node01.xx.local][WARNIN] Running command: /bin/ln -snf /dev/ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9/osd-block-66fd9200-a35e-4a36-85a2-a512b09826de /var/lib/ceph/osd/ceph-0/block
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-node01.xx.local][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-0-66fd9200-a35e-4a36-85a2-a512b09826de
[ceph-node01.xx.local][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-66fd9200-a35e-4a36-85a2-a512b09826de.service → /lib/systemd/system/ceph-volume@.service.
[ceph-node01.xx.local][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@0
[ceph-node01.xx.local][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /lib/systemd/system/ceph-osd@.service.
[ceph-node01.xx.local][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[ceph-node01.xx.local][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[ceph-node01.xx.local][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-node01.xx.local][INFO  ] checking OSD status...
[ceph-node01.xx.local][DEBUG ] find the location of an executable
[ceph-node01.xx.local][INFO  ] Running command: /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node01.xx.local is now ready for osd use.

Verify

root@ceph-node01:/etc/ceph-cluster# ceph -s
  cluster:
    id:     5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
            OSD count 2 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph-node01 (age 37m)
    mgr: ceph-node01(active, since 31m)
    osd: 2 osds: 2 up (since 21s), 2 in (since 30s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   580 MiB used, 2.0 TiB / 2.0 TiB avail
    pgs:
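The `insecure global_id reclaim` warning is expected on a freshly bootstrapped Pacific cluster. Once every client and daemon runs a release that supports secure global_id reclaim, the mon can be told to stop allowing the insecure path; a sketch (keep the default if older clients may still connect):

```shell
# only after all clients/daemons support secure global_id reclaim;
# this clears the HEALTH_WARN shown above
ceph config set mon auth_allow_insecure_global_id_reclaim false
```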

Check the mon quorum status

root@ceph-node01:~# ceph quorum_status --format json-pretty

{
    "election_epoch": 20,
    "quorum": [0, 1, 2],
    "quorum_names": ["ceph-node01", "ceph-node02", "ceph-node03"],
    "quorum_leader_name": "ceph-node01",
    "quorum_age": 77,
    "features": {
        "quorum_con": "4540138314316775423",
        "quorum_mon": ["kraken", "luminous", "mimic", "osdmap-prune", "nautilus", "octopus", "pacific", "elector-pinging"]
    },
    "monmap": {
        "epoch": 3,
        "fsid": "5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5",
        "modified": "2024-10-08T11:29:51.477381Z",
        "created": "2024-10-08T10:12:42.715558Z",
        "min_mon_release": 16,
        "min_mon_release_name": "pacific",
        "election_strategy": 1,
        "disallowed_leaders: ": "",
        "stretch_mode": false,
        "tiebreaker_mon": "",
        "removed_ranks: ": "",
        "features": {
            "persistent": ["kraken", "luminous", "mimic", "osdmap-prune", "nautilus", "octopus", "pacific", "elector-pinging"],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "ceph-node01",
                "public_addrs": {
                    "addrvec": [
                        {"type": "v2", "addr": "10.17.3.14:3300", "nonce": 0},
                        {"type": "v1", "addr": "10.17.3.14:6789", "nonce": 0}
                    ]
                },
                "addr": "10.17.3.14:6789/0",
                "public_addr": "10.17.3.14:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            },
            {
                "rank": 1,
                "name": "ceph-node02",
                "public_addrs": {
                    "addrvec": [
                        {"type": "v2", "addr": "10.17.3.15:3300", "nonce": 0},
                        {"type": "v1", "addr": "10.17.3.15:6789", "nonce": 0}
                    ]
                },
                "addr": "10.17.3.15:6789/0",
                "public_addr": "10.17.3.15:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            },
            {
                "rank": 2,
                "name": "ceph-node03",
                "public_addrs": {
                    "addrvec": [
                        {"type": "v2", "addr": "10.17.3.16:3300", "nonce": 0},
                        {"type": "v1", "addr": "10.17.3.16:6789", "nonce": 0}
                    ]
                },
                "addr": "10.17.3.16:6789/0",
                "public_addr": "10.17.3.16:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            }
        ]
    }
}
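For monitoring scripts, the same information can be consumed programmatically; a minimal sketch (the field names `quorum_names` and `quorum_leader_name` are taken from the output above) that summarizes the output of `ceph quorum_status --format json`:

```python
import json

def quorum_summary(raw: str) -> str:
    """Summarize the JSON emitted by `ceph quorum_status --format json`."""
    status = json.loads(raw)
    names = status["quorum_names"]
    leader = status["quorum_leader_name"]
    return f"{len(names)} mons in quorum ({', '.join(names)}), leader: {leader}"

if __name__ == "__main__":
    # On a live cluster, feed it the output of:
    #   ceph quorum_status --format json
    sample = ('{"quorum_names": ["ceph-node01", "ceph-node02", "ceph-node03"],'
              ' "quorum_leader_name": "ceph-node01"}')
    print(quorum_summary(sample))
    # → 3 mons in quorum (ceph-node01, ceph-node02, ceph-node03), leader: ceph-node01
```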

Deploy ceph-dashboard

# check whether the dashboard module packages are installed
root@ceph-node01:~# dpkg -l |grep ceph-mgr
ii  ceph-mgr                              16.2.15-1bionic                    arm64        manager for the ceph distributed storage system
ii  ceph-mgr-modules-core                 16.2.15-1bionic                    all          ceph manager modules which are always enabled
root@ceph-node01:~# apt install ceph-mgr-dashboard
# list the currently enabled modules and those available to enable
root@ceph-node01:~# ceph mgr module ls > ceph-mgr-module.json
# enable the dashboard module
root@ceph-node01:~# ceph mgr module enable dashboard
# disable SSL
root@ceph-node01:~# ceph config set mgr mgr/dashboard/ssl false
# set the listen address
root@ceph-node01:~# ceph config set mgr mgr/dashboard/ceph-node01/server_addr 10.17.3.14
# set the listen port
root@ceph-node01:~# ceph config set mgr mgr/dashboard/ceph-node01/server_port 9009
# verify the port is listening; if it is not, restart the mgr service
root@ceph-node01:~# systemctl restart ceph-mgr@ceph-node01.service
root@ceph-node01:~# systemctl status ceph-mgr@ceph-node01.service

Open TCP port 9009 in the security group, then test in a browser.

Set the ceph-dashboard password

root@ceph-node01:/etc/ceph-cluster# echo "cephdashboard" > ceph-dashboard-passwd.txt
root@ceph-node01:/etc/ceph-cluster# cat ceph-dashboard-passwd.txt
cephdashboard
root@ceph-node01:/etc/ceph-cluster# ceph dashboard set-login-credentials ceph -i ceph-dashboard-passwd.txt
******************************************************************
***          WARNING: this command is deprecated.              ***
*** Please use the ac-user-* related commands to manage users. ***
******************************************************************
Username and password updated
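As the warning above says, `set-login-credentials` is deprecated; the `ac-user-*` subcommands are the supported way to manage dashboard users. A sketch of creating an administrator account (the username `opsadmin` and password file name are only examples):

```shell
# create a dashboard user with the administrator role,
# reading the password from a file instead of the command line
echo "cephdashboard" > dashboard-passwd.txt
ceph dashboard ac-user-create opsadmin -i dashboard-passwd.txt administrator
```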

Configure the radosgw object storage gateway

apt-cache madison radosgw
cd /etc/ceph-cluster/
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy --overwrite-conf rgw create ceph-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy --overwrite-conf rgw create ceph-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  rgw                           : [('ceph-node01', 'rgw.ceph-node01')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff890a19b0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0xffff89142950>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts ceph-node01:rgw.ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ceph-node01
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node01][WARNIN] rgw keyring does not exist yet, creating one
[ceph-node01][DEBUG ] create a keyring file
[ceph-node01][DEBUG ] create path recursively if it doesn't exist
[ceph-node01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ceph-node01 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ceph-node01/keyring
[ceph-node01][INFO  ] Running command: systemctl enable ceph-radosgw@rgw.ceph-node01
[ceph-node01][WARNIN] Created symlink /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.ceph-node01.service → /lib/systemd/system/ceph-radosgw@.service.
[ceph-node01][INFO  ] Running command: systemctl start ceph-radosgw@rgw.ceph-node01
[ceph-node01][INFO  ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host ceph-node01 and default port 7480

Verify the service

cephadmin@ceph-node01:/etc/ceph-cluster$ systemctl status ceph-radosgw@rgw.ceph-node01.service
● ceph-radosgw@rgw.ceph-node01.service - Ceph rados gateway
   Loaded: loaded (/lib/systemd/system/ceph-radosgw@.service; indirect; vendor preset: enabled)
   Active: active (running) since Wed 2024-10-09 10:31:20 CST; 57s ago
 Main PID: 20282 (radosgw)
    Tasks: 602
   CGroup: /system.slice/system-ceph\x2dradosgw.slice/ceph-radosgw@rgw.ceph-node01.service
           └─20282 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-node01 --setuser ceph --setgroup ceph
cephadmin@ceph-node01:/etc/ceph-cluster$
cephadmin@ceph-node01:/etc/ceph-cluster$ ps -ef |grep radosgw
ceph       20282       1  0 10:31 ?        00:00:00 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-node01 --setuser ceph --setgroup ceph
cephadm+   21020   20167  0 10:32 pts/0    00:00:00 grep --color=auto radosgw

Verify the rgw endpoint from a client

cephadmin@ceph-node01:/etc/ceph-cluster$ curl http://10.17.3.14:7480/
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
cephadmin@ceph-node01:/etc/ceph-cluster$
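The anonymous response is a standard S3 `ListAllMyBucketsResult` document; a small standard-library sketch that parses it (the XML body is copied from the curl output above):

```python
import xml.etree.ElementTree as ET

# S3 XML namespace, as seen in the response above
S3_NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

def parse_list_buckets(xml_text: str):
    """Return (owner_id, [bucket names]) from a ListAllMyBucketsResult body."""
    root = ET.fromstring(xml_text)
    owner = root.find(f"{S3_NS}Owner/{S3_NS}ID").text
    buckets = [b.find(f"{S3_NS}Name").text
               for b in root.findall(f"{S3_NS}Buckets/{S3_NS}Bucket")]
    return owner, buckets

if __name__ == "__main__":
    body = ('<?xml version="1.0" encoding="UTF-8"?>'
            '<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
            '<Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner>'
            '<Buckets></Buckets></ListAllMyBucketsResult>')
    print(parse_list_buckets(body))  # → ('anonymous', [])
```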
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph -s
  cluster:
    id:     5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum ceph-node01,ceph-node02,ceph-node03 (age 39m)
    mgr: ceph-node01(active, since 15h)
    osd: 3 osds: 3 up (since 15h), 3 in (since 15h)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    pools:   5 pools, 129 pgs
    objects: 195 objects, 4.9 KiB
    usage:   872 MiB used, 3.0 TiB / 3.0 TiB avail
    pgs:     129 active+clean

Client: S3 Browser

Download:

https://s3browser.com/
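S3 Browser needs an access key pair to connect; these are created on the rgw side with `radosgw-admin` (the uid `s3user` and display name below are only examples):

```shell
# create an S3 user; note the access_key and secret_key in the
# JSON output and enter them into S3 Browser
radosgw-admin user create --uid=s3user --display-name="s3 test user"
# show the keys again later if needed
radosgw-admin user info --uid=s3user
```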

