If your Ceph cluster already has some OSDs accelerated with Bcache and the remaining, unaccelerated OSDs should get Bcache acceleration as well, the step-by-step record below can serve as a reference. If you spot a mistake, please leave a comment.

Check the current block devices

[root@ceph-node-1 ceph-admin-node]# lsblk
NAME                                                                                                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1                                                                                                 259:0    0 447.1G  0 disk 
├─bcache0                                                                                               252:0    0   3.7T  0 disk 
│ └─ceph--60bcd20c--264c--489b--8315--6b3fe7d9753b-osd--block--40eedca5--3120--416a--a021--b1dceaa15815 253:1    0   3.7T  0 lvm  
└─bcache1                                                                                               252:128  0   3.7T  0 disk 
  └─ceph--f67b425f--a786--42db--8ee7--d0dad6fb2ba4-osd--block--949401de--81d5--477c--8f22--b8c01e671d58 253:2    0   3.7T  0 lvm  
sdd                                                                                                       8:48   0   3.7T  0 disk 
sdb                                                                                                       8:16   0   3.7T  0 disk 
└─bcache0                                                                                               252:0    0   3.7T  0 disk 
  └─ceph--60bcd20c--264c--489b--8315--6b3fe7d9753b-osd--block--40eedca5--3120--416a--a021--b1dceaa15815 253:1    0   3.7T  0 lvm  
sde                                                                                                       8:64   0   3.7T  0 disk 
sdc                                                                                                       8:32   0   3.7T  0 disk 
└─bcache1                                                                                               252:128  0   3.7T  0 disk 
  └─ceph--f67b425f--a786--42db--8ee7--d0dad6fb2ba4-osd--block--949401de--81d5--477c--8f22--b8c01e671d58 253:2    0   3.7T  0 lvm  
sda                                                                                                       8:0    0 232.4G  0 disk 
├─sda2                                                                                                    8:2    0 231.9G  0 part 
│ └─centos-root                                                                                         253:0    0 231.9G  0 lvm  /
└─sda1                                                                                                    8:1    0   512M  0 part /boot

Query the cset.uuid of the caching device

[root@ceph-node-1 ceph-admin-node]# bcache-super-show /dev/nvme0n1
sb.magic                ok
sb.first_sector         8 [match]
sb.csum                 A6E9A08D651F8467 [match]
sb.version              3 [cache device]

dev.label               (empty)
dev.uuid                303e696e-6b30-4f06-a75d-294e3d197434
dev.sectors_per_block   1
dev.sectors_per_bucket  1024
dev.cache.first_sector  1024
dev.cache.cache_sectors 937701376
dev.cache.total_sectors 937702400
dev.cache.ordered       yes
dev.cache.discard       no
dev.cache.pos           0
dev.cache.replacement   0 [lru]

cset.uuid               e5291f51-32ae-4d4e-95ce-0e646d41dfc4
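
For the later steps only the cset.uuid line is needed; if you prefer to capture it in a shell variable instead of copying it by hand, something like this works (a convenience sketch, not part of the original record):

CSET_UUID=$(bcache-super-show /dev/nvme0n1 | awk '/cset.uuid/ {print $2}')
echo $CSET_UUID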

Query the Ceph OSD tree

[root@ceph-node-1 ceph-admin-node]# ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME             STATUS  REWEIGHT  PRI-AFF
-1         50.93478  root default                                   
-3          7.27640      host ceph-node-1                           
 0    ssd   3.63820          osd.0             up   1.00000  1.00000
 1    ssd   3.63820          osd.1             up   1.00000  1.00000
-5         14.55280      host ceph-node-2                           
 2    ssd   3.63820          osd.2             up   1.00000  1.00000
 3    ssd   3.63820          osd.3             up   1.00000  1.00000
12    ssd   3.63820          osd.12            up   1.00000  1.00000
13    ssd   3.63820          osd.13            up   1.00000  1.00000
-7         14.55280      host ceph-node-3                           
 4    ssd   3.63820          osd.4             up   1.00000  1.00000
 5    ssd   3.63820          osd.5             up   1.00000  1.00000
10    ssd   3.63820          osd.10            up   1.00000  1.00000
11    ssd   3.63820          osd.11            up   1.00000  1.00000
-9         14.55280      host ceph-node-4                           
 6    ssd   3.63820          osd.6             up   1.00000  1.00000
 7    ssd   3.63820          osd.7             up   1.00000  1.00000
 8    ssd   3.63820          osd.8             up   1.00000  1.00000
 9    ssd   3.63820          osd.9             up   1.00000  1.00000

Create the backing devices

make-bcache -B /dev/sdd /dev/sde
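
If make-bcache refuses to format the disks because they still carry an old superblock or filesystem signature, wiping them first is a common workaround; wipefs ships with util-linux and only removes the signatures (assumption: /dev/sdd and /dev/sde hold no data you still need):

wipefs -a /dev/sdd
wipefs -a /dev/sde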

Verify that the backing devices were created

[root@ceph-node-1 ceph-admin-node]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1         259:0    0 447.1G  0 disk 
sdd               8:48   0   3.7T  0 disk 
└─bcache2       252:256  0   3.7T  0 disk 
sdb               8:16   0   3.7T  0 disk 
└─bcache0       252:0    0   3.7T  0 disk 
sde               8:64   0   3.7T  0 disk 
└─bcache3       252:384  0   3.7T  0 disk 
sdc               8:32   0   3.7T  0 disk 
└─bcache1       252:128  0   3.7T  0 disk 
sda               8:0    0 232.4G  0 disk 
├─sda2            8:2    0 231.9G  0 part 
│ └─centos-root 253:0    0 231.9G  0 lvm  /
└─sda1            8:1    0   512M  0 part /boot

Attach the cache device

echo e5291f51-32ae-4d4e-95ce-0e646d41dfc4 > /sys/block/bcache2/bcache/attach
echo e5291f51-32ae-4d4e-95ce-0e646d41dfc4 > /sys/block/bcache3/bcache/attach
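
After attaching, the state of the new devices should change from "no cache" to "clean". Bcache defaults to writethrough caching, while the existing OSDs on this node are clearly running in writeback mode (their state reads "dirty" further below), so if the new devices should behave the same way the mode can be switched through sysfs. These are optional commands, not part of the original record:

cat /sys/block/bcache2/bcache/state
cat /sys/block/bcache3/bcache/state
echo writeback > /sys/block/bcache2/bcache/cache_mode
echo writeback > /sys/block/bcache3/bcache/cache_mode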

Add the OSDs

ceph-deploy osd create ceph-node-1 --data /dev/bcache2
ceph-deploy osd create ceph-node-1 --data /dev/bcache3
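
If ceph-deploy is not available (it is deprecated in newer Ceph releases), the same result can be achieved by running ceph-volume locally on the node; a sketch, assuming the bootstrap-osd keyring is already present on ceph-node-1:

ceph-volume lvm create --data /dev/bcache2
ceph-volume lvm create --data /dev/bcache3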

Check the status

[root@ceph-node-1 ~]# cat /sys/block/bcache0/bcache/state
dirty
[root@ceph-node-1 ~]# cat /sys/block/bcache1/bcache/state
dirty
[root@ceph-node-1 ~]# cat /sys/block/bcache2/bcache/state
clean
[root@ceph-node-1 ~]# cat /sys/block/bcache3/bcache/state
clean
[root@ceph-node-1 ~]# 
[root@ceph-node-1 ~]# cat /sys/block/bcache0/bcache/dirty_data
12.5G
[root@ceph-node-1 ~]# cat /sys/block/bcache1/bcache/dirty_data
11.0G
[root@ceph-node-1 ~]# cat /sys/block/bcache2/bcache/dirty_data
0.0k
[root@ceph-node-1 ~]# cat /sys/block/bcache3/bcache/dirty_data
0.0k
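
To keep an eye on the writeback progress of all bcache devices at once, a simple watch loop is enough (a convenience sketch, not part of the original record):

watch -n 5 'grep . /sys/block/bcache*/bcache/dirty_data'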

Possible values of state:

no cache: the backing device has no caching device attached
clean: everything is fine and the cache is clean
dirty: everything is fine, writeback is enabled, and the cache holds dirty data
inconsistent: something went wrong; the backing device and the caching device are out of sync


Disabling Bcache for Ceph

  1. Stop the OSD
  2. Detach the Bcache caching device
  3. Watch the dirty data being written back; once it reaches 0, bring the OSD back up (see the sketch below)
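
A minimal sketch of those three steps for a single device, assuming osd.0 sits on bcache0 (adjust the OSD id and bcache device to your layout):

systemctl stop ceph-osd@0
echo 1 > /sys/block/bcache0/bcache/detach
cat /sys/block/bcache0/bcache/dirty_data   # repeat until it reads 0.0k and state reports "no cache"
systemctl start ceph-osd@0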

Tags: Ceph, Bcache
