This article explains how to add and remove Ceph monitors (mons). The walkthrough is kept simple and clear; follow along step by step.
1. Environment preparation

1.1. Existing environment
We start from an existing three-node Ceph cluster with 3 mons, to which we will now add a fourth:

# ceph -s
    cluster 520d715f-adb5-4a6a-afb2-dcf586308166
     health HEALTH_OK
     monmap e3: 3 mons at {hadoop001=10.10.1.32:6789/0,hadoop002=10.10.1.33:6789/0,hadoop003=10.10.1.34:6789/0}
            election epoch 1850, quorum 0,1,2 hadoop001,hadoop002,hadoop003
     osdmap e127: 4 osds: 4 up, 4 in
            flags sortbitwise
      pgmap v22405: 64 pgs, 1 pools, 0 bytes data, 0 objects
            145 MB used, 334 GB / 334 GB avail
                  64 active+clean

1.2. System environment
To add a new mon node, its system environment must be configured to match the existing nodes. The items that need configuring are simply listed here without further elaboration (a preparation sketch follows the list):
hostname, /etc/hosts, ssh mutual trust, firewall, time synchronization, SELinux, max processes, max file handles, max threads, and the Ceph yum repository.
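As a reference, here is a minimal preparation sketch for the new node (hadoop004); the use of chrony and the limit values are illustrative assumptions, not taken from the original cluster:

Set the hostname and make it resolvable on every node
# hostnamectl set-hostname hadoop004
# echo "10.10.1.36 hadoop004" >> /etc/hosts

Relax the firewall and SELinux (or explicitly open the mon port 6789 instead)
# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

Keep time in sync (assumes chrony; point it at your own time server)
# yum install -y chrony && systemctl enable chronyd && systemctl start chronyd

Raise process and file-handle limits (example values)
# cat >> /etc/security/limits.conf <<'EOF'
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
EOF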
2. Using ceph-deploy

2.1. Adding a mon with ceph-deploy

Once the system environment is ready, install the ceph packages on the new mon node:

# yum install ceph
From one of the existing mon nodes, create the new mon directly with ceph-deploy.
Note: public_network must be set in the configuration file, otherwise adding the mon may fail (a sample fragment follows the command output):

# ceph-deploy mon create hadoop004
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /usr/bin/ceph-deploy mon create hadoop004
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
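The article does not show the configuration file itself; a plausible fragment, with the subnet inferred from the node IPs (10.10.1.0/24 is an assumption, adjust to your network):

[global]
public_network = 10.10.1.0/24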
# ceph -s
    cluster 520d715f-adb5-4a6a-afb2-dcf586308166
     health HEALTH_OK
     monmap e4: 4 mons at {hadoop001=10.10.1.32:6789/0,hadoop002=10.10.1.33:6789/0,hadoop003=10.10.1.34:6789/0,hadoop004=10.10.1.36:6789/0}
            election epoch 1850, quorum 0,1,2,3 hadoop001,hadoop002,hadoop003,hadoop004
     osdmap e127: 4 osds: 4 up, 4 in
            flags sortbitwise
      pgmap v22405: 64 pgs, 1 pools, 0 bytes data, 0 objects
            145 MB used, 334 GB / 334 GB avail
                  64 active+clean
After the mon is added, append the new node's hostname and IP address to the mon_initial_members and mon_host parameters in ceph.conf (a sample fragment is shown in section 3.1).

2.2. Removing a mon with ceph-deploy

# ceph-deploy mon destroy hadoop004
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /usr/bin/ceph-deploy mon destroy hadoop004
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : destroy
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
For a thorough cleanup (proceed with caution):
Note: this will delete all Ceph data, configuration files, and rpm packages on mon node hadoop004.

# ceph-deploy purge hadoop004
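If you only want to wipe data and configuration but keep the packages installed, ceph-deploy also offers a purgedata subcommand (not used in this walkthrough):

# ceph-deploy purgedata hadoop004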
If the cleanup feels incomplete, go to hadoop004 and remove the leftover directories:

# rm -rf /var/lib/ceph
# rm -rf /var/run/ceph/*

3. Manual operation
The previous chapter cleaned hadoop004's mon away completely; now we add it back by hand.

3.1. Adding a mon manually
On hadoop004, install the packages (the mon data directory itself is created a few steps below):

[root@hadoop004 ~]# yum install ceph
On hadoop001, copy ceph.conf and the client admin keyring to hadoop004's /etc/ceph directory:

# scp ceph.conf ceph.client.admin.keyring hadoop004:/etc/ceph/
On hadoop004, fetch the mon keyring:

# mkdir dlw
# cd dlw/
# ceph auth get mon. -o keyring
exported keyring for mon.
Fetch the monitor map:

# ceph mon getmap -o monmap
got monmap epoch 5
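Optionally, sanity-check the fetched map before using it, with monmaptool (the same tool used in section 4):

# monmaptool --print monmap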
Create the monitor data directory:

# ceph-mon -i hadoop004 --mkfs --monmap monmap --keyring keyring
ceph-mon: set fsid to 520d715f-adb5-4a6a-afb2-dcf586308166
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-hadoop004 for mon.hadoop004
Start the new monitor:

# ceph-mon -i hadoop004 --public-addr 10.10.1.36:6789
Check the status:

# ceph -s
    cluster 520d715f-adb5-4a6a-afb2-dcf586308166
     health HEALTH_OK
     monmap e6: 4 mons at {hadoop001=10.10.1.32:6789/0,hadoop002=10.10.1.33:6789/0,hadoop003=10.10.1.34:6789/0,hadoop004=10.10.1.36:6789/0}
            election epoch 1854, quorum 0,1,2,3 hadoop001,hadoop002,hadoop003,hadoop004
     osdmap e127: 4 osds: 4 up, 4 in
            flags sortbitwise
      pgmap v22405: 64 pgs, 1 pools, 0 bytes data, 0 objects
            145 MB used, 334 GB / 334 GB avail
                  64 active+clean
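To double-check that the new mon has actually joined quorum, the quorum status command from section 4 can be used as well:

# ceph quorum_status -f json-pretty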
The cluster now has four mons, but we are not done yet. Ceph's strength is its self-healing, and we should not have to run ceph-mon by hand every time the new mon needs to start.
Adding the ceph-mon@hadoop004 service
First find the mon process started just now and terminate it:

# ps -ef | grep ceph
root     30514     1  0 18:25 pts/1    00:00:00 ceph-mon -i hadoop004 --public-addr 10.10.1.36:6789
root     30899  9739  0 18:30 pts/1    00:00:00 grep --color=auto ceph
# kill 30514
Add the new mon node's hostname and IP address to the mon_initial_members and mon_host parameters in ceph.conf; a sample fragment follows.
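A sample of the updated lines, using the hostnames and IPs of this cluster (the rest of the file is omitted):

[global]
fsid = 520d715f-adb5-4a6a-afb2-dcf586308166
mon_initial_members = hadoop001, hadoop002, hadoop003, hadoop004
mon_host = 10.10.1.32,10.10.1.33,10.10.1.34,10.10.1.36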
Before starting the service, change the ownership of the mon data directory to the ceph user:

# cd /var/lib/ceph/mon
# chown -R ceph:ceph ceph-hadoop004/
Start the mon service:

# systemctl reset-failed ceph-mon@hadoop004.service
# systemctl restart ceph-mon@`hostname`
# systemctl enable ceph-mon@`hostname`
# systemctl restart ceph-mon.target
# systemctl status ceph-mon@`hostname`
● ceph-mon@hadoop004.service - Ceph cluster monitor daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-08-02 18:37:36 CST; 3s ago
 Main PID: 31115 (ceph-mon)
   CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@hadoop004.service
           └─31115 /usr/bin/ceph-mon -f --cluster ceph --id hadoop004 --setuser ceph --setgroup ceph
Aug 02 18:37:36 hadoop004 systemd[1]: Started Ceph cluster monitor daemon.
Aug 02 18:37:36 hadoop004 systemd[1]: Starting Ceph cluster monitor daemon...
Aug 02 18:37:36 hadoop004 ceph-mon[31115]: starting mon.hadoop004 rank 3 at 10.10.1.36:6789/0 mon_data /var/lib/ceph/mon/ceph-hadoop004 fsid 520d715f-adb5-4a6a-afb2-dcf586308166

3.2. Removing a mon manually

# ceph mon remove hadoop004
Error EINVAL: removing mon.hadoop004 at 10.10.1.36:6789/0, there will be 3 monitors

Despite the Error EINVAL prefix this version prints, the message itself shows the mon being removed, leaving 3 monitors.
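A caution drawn from the upstream removal procedure rather than the original text: it is safer to stop the daemon first and disable its unit so it cannot rejoin at boot, and only then remove it from the map:

Stop and disable the unit on hadoop004, then drop it from the monmap
# systemctl stop ceph-mon@hadoop004
# systemctl disable ceph-mon@hadoop004
# ceph mon remove hadoop004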
Clean up the data directory and uninstall the packages:
# rm -rf /var/lib/ceph
# rm -rf /var/run/ceph/*
# yum remove ceph

4. Command reference

Check the cluster's mon election/quorum status:
# ceph quorum_status -f json-pretty

Extract the monmap (stop the mon first):
# ceph-mon -i `hostname` --extract-monmap /opt/monmap

Print a monmap:
# monmaptool --print /opt/monmap

Add a mon to a monmap:
# monmaptool /opt/monmap --add hadoop004 10.10.1.36:6789

Remove a mon from a monmap:
# monmaptool /tmp/monmap --rm hadoop004

Inject a monmap; all mons must be stopped before injecting:
# systemctl stop ceph-mon@`hostname`
# ceph-mon -i `hostname` --inject-monmap /opt/monmap
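Putting the commands above together, here is a sketch of the offline monmap-edit procedure, e.g. for dropping a dead mon named hadoop004 when quorum cannot form (run on each surviving mon; the paths are illustrative):

Stop every mon, then extract the map from one of the survivors
# systemctl stop ceph-mon@`hostname`
# ceph-mon -i `hostname` --extract-monmap /opt/monmap

Remove the dead mon from the map and verify
# monmaptool /opt/monmap --rm hadoop004
# monmaptool --print /opt/monmap

Inject the edited map into each surviving mon, then start them again
# ceph-mon -i `hostname` --inject-monmap /opt/monmap
# systemctl start ceph-mon@`hostname`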