Deploying a Ceph Storage Cluster and Testing Block Devices

Cluster environment

Three nodes: ceph01 (10.1.10.201), ceph02 (10.1.10.202) and ceph03 (10.1.10.203), each with three data disks /dev/sdb, /dev/sdc and /dev/sdd. Unless noted otherwise, commands are run as the ceph-admin user created below.

Configure the base environment

# Add ceph.repo
wget -O /etc/yum.repos.d/ceph.repo https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/ceph.repo
yum makecache

# Configure NTP
yum -y install ntpdate ntp
ntpdate cn.ntp.org.cn
systemctl restart ntpd ntpdate;systemctl enable ntpd ntpdate
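
To confirm that ntpd has actually picked a time source (an optional sanity check, not part of the original steps), the peer status can be queried:

# (optional) verify NTP sync status; the active peer is marked with an asterisk
ntpq -p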

# Create the ceph-admin user and grant it passwordless sudo
useradd ceph-admin
echo "ceph-admin"|passwd --stdin ceph-admin
echo "ceph-admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-admin
sudo chmod 0440 /etc/sudoers.d/ceph-admin

# Configure /etc/hosts name resolution
cat >>/etc/hosts<<eof
10.1.10.201 ceph01
10.1.10.202 ceph02
10.1.10.203 ceph03
eof

# Configure sudo to not require a tty
sed -i 's/^Defaults[[:space:]]*requiretty/#&/' /etc/sudoers

 

Deploy the cluster with ceph-deploy

# Set up passwordless SSH between the nodes
su - ceph-admin
ssh-keygen
ssh-copy-id ceph-admin@ceph01
ssh-copy-id ceph-admin@ceph02
ssh-copy-id ceph-admin@ceph03
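
To avoid passing --username ceph-admin on every ceph-deploy command, an ~/.ssh/config on the deploy node is a common convenience; a minimal sketch (hostnames taken from the hosts file above):

# (optional) let ssh/ceph-deploy log in to the nodes as ceph-admin by default
cat >>~/.ssh/config<<eof
Host ceph01
    User ceph-admin
Host ceph02
    User ceph-admin
Host ceph03
    User ceph-admin
eof
chmod 600 ~/.ssh/config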

# Install ceph-deploy
sudo yum install -y ceph-deploy python-pip

# On the deploy node: create the working directory and initialize the cluster
mkdir my-cluster;cd my-cluster
ceph-deploy new ceph01 ceph02 ceph03

# Edit the ceph.conf configuration file
cat >>/home/ceph-admin/my-cluster/ceph.conf<<eof
public network = 10.1.10.0/24
cluster network = 10.1.10.0/24
eof
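
After ceph-deploy new and the append above, the ceph.conf in the working directory should look roughly like the following (the fsid below is a placeholder; yours is generated automatically):

# example of the resulting ceph.conf (fsid is a placeholder)
[global]
fsid = xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 10.1.10.201,10.1.10.202,10.1.10.203
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 10.1.10.0/24
cluster network = 10.1.10.0/24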

# Install the Ceph packages (instead of running ceph-deploy install node1 node2; the command below must be run on every node)
sudo yum install -y ceph ceph-radosgw

# Deploy the initial monitor(s) and gather all keys
ceph-deploy mon create-initial
ls -l *.keyring

# Copy the configuration file and admin key to each node
ceph-deploy admin ceph01 ceph02 ceph03

# Configure the OSDs
su - ceph-admin
cd /home/ceph-admin/my-cluster

for dev in /dev/sdb /dev/sdc /dev/sdd
do
ceph-deploy disk zap ceph01 $dev
ceph-deploy osd create ceph01 --data $dev
ceph-deploy disk zap ceph02 $dev
ceph-deploy osd create ceph02 --data $dev
ceph-deploy disk zap ceph03 $dev
ceph-deploy osd create ceph03 --data $dev
done
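
Once the loop finishes, it is worth confirming that all nine OSDs (3 nodes × 3 disks) are up and in before continuing, for example:

# verify the OSDs from any node that has the admin keyring
sudo ceph -s
sudo ceph osd tree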

# Deploy mgr (required only from the Luminous release onward)
ceph-deploy mgr create ceph01 ceph02 ceph03

# Enable the dashboard module (the chown lets the ceph-admin user read the admin keyring)
sudo chown -R ceph-admin /etc/ceph/
ceph mgr module enable dashboard
netstat -lntup|grep 7000

http://10.1.10.201:7000
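
By default the Luminous dashboard listens on port 7000 of the active mgr node. If it needs to be pinned to a specific address or a different port, one option is the mgr config keys shown below (the values simply mirror the URL above; re-enable the module afterwards so the change takes effect):

# (optional) pin the dashboard address/port, then reload the module
ceph config-key set mgr/dashboard/server_addr 10.1.10.201
ceph config-key set mgr/dashboard/server_port 7000
ceph mgr module disable dashboard
ceph mgr module enable dashboard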

 

Configure Ceph block storage

 

# Check that the environment meets the RBD block device requirements
uname -r
modprobe rbd
echo $?
# Create a pool for block devices
ceph osd lspools
ceph osd pool create rbd 128
Specifying pg_num is mandatory because it cannot be calculated automatically; commonly used values are listed below (a sketch for checking and adjusting pg_num follows the list):
Fewer than 5 OSDs: set pg_num to 128
5 to 10 OSDs: set pg_num to 512
10 to 50 OSDs: set pg_num to 4096
More than 50 OSDs: understand the trade-offs and calculate pg_num yourself
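
As a concrete example of the last point, pg_num on an existing pool can be inspected and raised (it cannot be decreased on these releases); pgp_num should be bumped along with it:

# check the current value and raise it if needed
ceph osd pool get rbd pg_num
ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256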

# Create a block device (image) on the client
rbd create rbd1 --size 1g --image-feature layering --name client.admin
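
To confirm the image exists with the expected size and feature set, it can be listed and inspected:

# inspect the new image
rbd ls
rbd info rbd1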

# Map the block device
rbd map --image rbd1 --name client.admin
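
The map command prints the kernel device it created (typically /dev/rbd0, which the following steps assume); rbd showmapped lists all current mappings if you need to double-check:

# list mapped RBD devices
rbd showmapped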

# Create a filesystem and mount it
fdisk -l /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir /mnt/ceph-disk1
mount /dev/rbd0 /mnt/ceph-disk1
df -h /mnt/ceph-disk1

# Write test data
dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M

# Stress test with fio
fio -direct=1 -iodepth=1 -rw=read -ioengine=libaio -bs=2k -size=100g -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=readiops
fio -direct=1 -iodepth=1 -rw=write -ioengine=libaio -bs=2k -size=100g -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=writeiops
fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=2k -size=100g -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=randreadiops
fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=libaio -bs=2k -size=100g -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=randwriteiops
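
The same four tests can also be kept as a fio job file instead of long one-liners; a sketch with the parameters copied from the commands above (the file name jobs.fio is arbitrary):

# jobs.fio -- run a single test with: fio jobs.fio --section=readiops
[global]
direct=1
iodepth=1
ioengine=libaio
bs=2k
size=100g
numjobs=128
runtime=30
group_reporting
filename=/dev/rbd0

[readiops]
rw=read

[writeiops]
rw=write

[randreadiops]
rw=randread

[randwriteiops]
rw=randwrite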