My Ceph cluster's OSDs were originally created the default way, i.e. "ceph-deploy osd create ceph-node-1 --data /dev/bcache0". Performance hit a bottleneck, so to squeeze the most out of the NVMe AIC SSD I decided to rebuild the OSDs and move the WAL/DB partitions onto it. According to the community and the official docs, rebuilding is the only way to do this; since Ceph allows two different kinds of OSD to coexist in one cluster, the transition can be done smoothly.

I have 4 nodes, so I rebuild 4 OSDs at a time. The number of OSDs being rebuilt should stay around 20% of the total; don't do too many in one go.

First, reweight the OSDs to 0; this immediately triggers data migration off them.

ceph osd reweight 0 0
ceph osd reweight 1 0
ceph osd reweight 2 0
ceph osd reweight 3 0

Stop the corresponding OSD daemons; the cluster starts rebalancing data.

systemctl stop ceph-osd@0.service
systemctl stop ceph-osd@1.service
systemctl stop ceph-osd@2.service
systemctl stop ceph-osd@3.service

The OSDs are still in the cluster but out, so the total OSD count is unchanged. Next, destroy them:

ceph osd destroy 0 --yes-i-really-mean-it
ceph osd destroy 1 --yes-i-really-mean-it
ceph osd destroy 2 --yes-i-really-mean-it
ceph osd destroy 3 --yes-i-really-mean-it
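The reweight / stop / destroy sequence above can be looped per OSD id. A dry-run sketch (the function only prints the commands for review; pipe its output to `sh` to actually run them):

```shell
# Dry run of the drain sequence: for each OSD id, print the reweight,
# stop, and destroy commands in the same order as above.
drain_cmds() {
    local id
    for id in "$@"; do
        echo "ceph osd reweight ${id} 0"
        echo "systemctl stop ceph-osd@${id}.service"
        echo "ceph osd destroy ${id} --yes-i-really-mean-it"
    done
}
drain_cmds 0 1 2 3
```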

Detach the bcache devices

I'm not sure whether detaching is actually required here; it just felt necessary. If you have the chance, try stopping the devices directly and see whether the cache disk can still be unregistered cleanly.

bcache-super-show /dev/nvme0n1   # get the bcache cache disk's cset.uuid
echo d48b557b-0484-483a-93ab-1c02ce1fdeb0 > /sys/block/bcache0/bcache/detach
echo d48b557b-0484-483a-93ab-1c02ce1fdeb0 > /sys/block/bcache1/bcache/detach
echo d48b557b-0484-483a-93ab-1c02ce1fdeb0 > /sys/block/bcache2/bcache/detach
echo d48b557b-0484-483a-93ab-1c02ce1fdeb0 > /sys/block/bcache3/bcache/detach
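To avoid pasting the uuid four times, it can be parsed out of the superblock dump once. A small helper, assuming the standard `cset.uuid <uuid>` line that bcache-super-show prints:

```shell
# cset_uuid: print the cache-set uuid from a cache device's superblock.
cset_uuid() {
    bcache-super-show "$1" | awk '$1 == "cset.uuid" {print $2}'
}
# Usage on a real host:
#   cset_uuid /dev/nvme0n1 > /sys/block/bcache0/bcache/detach
```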

Watch until all dirty data has been written back (reads 0) before moving on:

cat /sys/block/bcache0/bcache/dirty_data
cat /sys/block/bcache1/bcache/dirty_data
cat /sys/block/bcache2/bcache/dirty_data
cat /sys/block/bcache3/bcache/dirty_data
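Instead of re-running the four `cat`s by hand, the check can be wrapped in a polling function. A sketch, with the sysfs root parameterized so it can be exercised outside a real host; it assumes dirty_data reads something like "0.0k" once writeback is finished:

```shell
# all_clean: succeed only when every bcache device reports zero dirty data.
SYS_BASE=${SYS_BASE:-/sys/block}
all_clean() {
    local dev val
    for dev in bcache0 bcache1 bcache2 bcache3; do
        val=$(cat "$SYS_BASE/$dev/bcache/dirty_data" 2>/dev/null) || return 1
        case "$val" in
            0|0.0k|0B) ;;      # clean
            *) return 1 ;;     # still dirty
        esac
    done
}
# Usage: until all_clean; do sleep 10; done
```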

Stop the backing devices (the data disks)

echo 1 > /sys/block/bcache0/bcache/stop
echo 1 > /sys/block/bcache1/bcache/stop
echo 1 > /sys/block/bcache2/bcache/stop
echo 1 > /sys/block/bcache3/bcache/stop

Unregister the cache disk

echo 1  > /sys/fs/bcache/d48b557b-0484-483a-93ab-1c02ce1fdeb0/unregister

Format/wipe the cache disk

mkfs.ext4  /dev/nvme0n1
wipefs -a /dev/nvme0n1

Remove the Ceph OSD VGs/LVs

# pvs
  PV           VG                                        Fmt  Attr PSize   PFree
  /dev/bcache0 ceph-637d0345-995a-4491-a8c2-33405942a94d lvm2 a--   <3.64t    0 
  /dev/bcache1 ceph-69474643-05b4-4bb9-ae69-51b90858c1b8 lvm2 a--   <3.64t    0 
  /dev/bcache2 ceph-e17bef53-5f39-4045-a973-bf7d2c5ab75f lvm2 a--   <3.64t    0 
  /dev/bcache3 ceph-61c39e07-a465-43c6-b9a6-b788d016d613 lvm2 a--   <3.64t    0 
  /dev/sda2    centos                                    lvm2 a--  231.87g    0 

# vgremove ceph-637d0345-995a-4491-a8c2-33405942a94d
# vgremove ceph-69474643-05b4-4bb9-ae69-51b90858c1b8
# vgremove ceph-e17bef53-5f39-4045-a973-bf7d2c5ab75f 
# vgremove ceph-61c39e07-a465-43c6-b9a6-b788d016d613
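Rather than copying the VG names out of the pvs output by hand, they can be read per PV. A dry-run sketch (assumes the bcache devices are exactly the OSD data PVs, as in the listing above; pipe the output to `sh` to execute):

```shell
# vg_cmds: print one vgremove command per bcache PV, using pvs to look
# up each PV's volume group name.
vg_cmds() {
    pvs --noheadings -o vg_name /dev/bcache0 /dev/bcache1 /dev/bcache2 /dev/bcache3 |
    while read -r vg; do
        echo "vgremove -f ${vg}"
    done
}
```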

Wipe the HDDs

wipefs -a /dev/sdb
wipefs -a /dev/sdc
wipefs -a /dev/sdd
wipefs -a /dev/sde

Create the partitions

bash partition.sh
Contents of partition.sh:
#!/bin/bash
parted -s /dev/nvme0n1 mklabel gpt
start=0
# 5 x 35GB DB partitions
end=$((start + 35))
parted -s /dev/nvme0n1 mkpart primary 2048s ${end}GiB
start=$end
for i in {1..4}
do
    end=$((start + 35))
    parted -s /dev/nvme0n1 mkpart primary ${start}GiB ${end}GiB
    start=$end
done
# 5 x 4GB WAL partitions
for i in {1..5}
do
    end=$((start + 4))
    parted -s /dev/nvme0n1 mkpart primary ${start}GiB ${end}GiB
    start=$end
done
# 5 x 50GB bcache cache partitions
for i in {1..5}
do
    end=$((start + 50))
    parted -s /dev/nvme0n1 mkpart primary ${start}GiB ${end}GiB
    start=$end
done

# NVMe capacity: 480GB * 0.93, i.e. about 447GB usable
# DB partitions: 35GB * 5 = 175GB
# WAL partitions: 4GB * 5 = 20GB
# DB+WAL subtotal: 195GB, leaving 252GB
# Bcache cache partitions: 50GB * 5 = 250GB, leaving 2GB
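The layout arithmetic above is easy to double-check in shell (all figures in GB, as in the comments):

```shell
# Sanity-check the partition plan: 5 DB + 5 WAL + 5 cache partitions
# against the usable NVMe capacity.
db=$((35 * 5)); wal=$((4 * 5)); cache=$((50 * 5))
nvme=447
used=$((db + wal + cache))
echo "used=${used}GB remaining=$((nvme - used))GB"
```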

Create the bcache devices

bash create_bcache.sh
Contents of create_bcache.sh:
#!/bin/bash
# Pair each HDD with one NVMe cache partition, starting at nvme0n1p11
n=11
for disk in {b..e}
do
    make-bcache -B /dev/sd${disk} -C /dev/nvme0n1p${n}
    ((n++))
done
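The pairing the script produces can be previewed as a dry run; the function below just prints the commands, so you can verify sdb..sde land on p11..p14 before touching any device:

```shell
# pairings: print the make-bcache command per disk, without running it.
pairings() {
    local n=11 disk
    for disk in b c d e; do
        echo "make-bcache -B /dev/sd${disk} -C /dev/nvme0n1p${n}"
        n=$((n + 1))
    done
}
pairings
```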

Zap the OSD disks

ceph-volume lvm zap /dev/bcache0
ceph-volume lvm zap /dev/bcache1
ceph-volume lvm zap /dev/bcache2
ceph-volume lvm zap /dev/bcache3

Rebuild the OSDs with their original IDs

ceph-volume lvm prepare --osd-id 0 --data /dev/bcache0 --block.wal /dev/nvme0n1p6 --block.db /dev/nvme0n1p1
ceph-volume lvm prepare --osd-id 1 --data /dev/bcache1 --block.wal /dev/nvme0n1p7 --block.db /dev/nvme0n1p2 
ceph-volume lvm prepare --osd-id 2 --data /dev/bcache2 --block.wal /dev/nvme0n1p8 --block.db /dev/nvme0n1p3
ceph-volume lvm prepare --osd-id 3 --data /dev/bcache3 --block.wal /dev/nvme0n1p9 --block.db /dev/nvme0n1p4
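One hedged caveat: `ceph-volume lvm prepare` only prepares the OSD; if the systemd units later refuse to start, the prepared OSDs may still need activating. A dry-run sketch (the function just prints the command; drop the wrapper to run it):

```shell
# Assumption: prepare (not create) was used above, so activation may be
# a required extra step on some deployments.
activate_cmd() { echo "ceph-volume lvm activate --all"; }
activate_cmd
```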

Set the OSD weights back to their pre-rebuild values

ceph osd reweight 0 1
ceph osd reweight 1 1
ceph osd reweight 2 1
ceph osd reweight 3 1

Start the OSDs; the cluster begins rebalancing data

systemctl start ceph-osd@0.service
systemctl start ceph-osd@1.service
systemctl start ceph-osd@2.service
systemctl start ceph-osd@3.service
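After the units start, recovery can be followed until the cluster reports healthy again. A small blocking helper (sketch; polls `ceph health` every 30s):

```shell
# wait_healthy: block until the cluster is back to HEALTH_OK after the
# rebuilt OSDs finish backfilling.
wait_healthy() {
    until ceph health 2>/dev/null | grep -q HEALTH_OK; do
        sleep 30
    done
}
# Usage: wait_healthy && echo "rebalance done"
```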

Most of this could be done with a single script, but I broke it down into steps to make it easier to document.

This post draws on (and copies from) the following articles:
https://docs.ceph.com/en/octopus/rados/operations/add-or-rm-osds/#replacing-an-osd
http://strugglesquirrel.com/2018/11/20/luminous-12-2-8%E9%87%8D%E5%BB%BAosd%E7%9A%84%E6%AD%A3%E7%A1%AE%E5%A7%BF%E5%8A%BF/
https://support.huaweicloud.com/fg-kunpengsdss/kunpengswc_20_0010.html

Tags: Ceph, Bcache
