I have recently been working on the Ping An Cloud MySQL DRDS project, where I am mainly responsible for configuring the ZooKeeper module and writing the back-end scripts. To get a better grip on the overall DRDS architecture, I decided it was worth building a Mycat-based MySQL distributed database cluster by hand. This post walks through the setup; follow-up posts will look at the individual Mycat configuration options in more detail.
Mycat and MySQL instance deployment:
Mycat:
IP: 10.20.8.57, ports: 3310/3311
MySQL:
db1-M1, IP: 10.20.8.126, port: 3306
db1-M2, IP: 10.20.8.126, port: 3307
db2-M1, IP: 10.25.80.7, port: 3307
The overall architecture: Mycat on 10.20.8.57 exposes the logical schema db and routes it across two dataNodes, dn1 on dataHost shard1 (writeHosts db1-M1 and db1-M2 on 10.20.8.126) and dn2 on dataHost shard2 (writeHost db2-M1 on 10.25.80.7).
Configuring Mycat
server.xml:
The settings that matter for this walkthrough are serverPort 3310 and managerPort 3311 (the two ports Mycat listens on), defaultSqlParser druidparser, and a front-end user root with password 123456 that is granted the logical schema db with readOnly false; the other system-tuning properties are not relevant here.
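A minimal server.xml consistent with those settings might look like the sketch below; only the two ports, the parser, and the user definition come from this setup, everything else is left at the Mycat 1.6 defaults:
<mycat:server xmlns:mycat="http://io.mycat/">
    <system>
        <property name="serverPort">3310</property>        <!-- SQL service port used by clients -->
        <property name="managerPort">3311</property>       <!-- management port -->
        <property name="defaultSqlParser">druidparser</property>
    </system>
    <!-- front-end account that clients use to connect to Mycat -->
    <user name="root">
        <property name="password">123456</property>
        <property name="schemas">db</property>             <!-- logical schema exposed by Mycat -->
        <property name="readOnly">false</property>
    </user>
</mycat:server>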
schema.xml:
Both dataHosts use select user() as their heartbeat statement; the rest of the markup is shown in the sketch below.
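A schema.xml sketch that matches the topology used in this post (t1 sharded over dn1/dn2 by mod-long, dataHosts shard1/shard2, writeType="0", and sqlMaxLimit 100, which is why the logs later show LIMIT 100 appended to the selects); the maxCon/minCon, balance and switchType values are assumptions:
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="db" checkSQLschema="false" sqlMaxLimit="100">
        <!-- t1 is horizontally sharded across dn1 and dn2 by the mod-long rule -->
        <table name="t1" dataNode="dn1,dn2" rule="mod-long"/>
    </schema>
    <dataNode name="dn1" dataHost="shard1" database="db1"/>
    <dataNode name="dn2" dataHost="shard2" database="db2"/>
    <!-- shard1: db1-M1 is the active writeHost, db1-M2 is the stand-by writeHost that
         also serves reads (balance="1"); writeType="0" sends all writes to the first
         available writeHost, switchType="1" enables automatic switchover -->
    <dataHost name="shard1" maxCon="1000" minCon="10" balance="1" writeType="0"
              dbType="mysql" dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
        <writeHost host="db1-M1" url="10.20.8.126:3306" user="root" password="123456"/>
        <writeHost host="db1-M2" url="10.20.8.126:3307" user="root" password="123456"/>
    </dataHost>
    <dataHost name="shard2" maxCon="1000" minCon="10" balance="1" writeType="0"
              dbType="mysql" dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
        <writeHost host="db2-M1" url="10.25.80.7:3307" user="root" password="123456"/>
    </dataHost>
</mycat:schema>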
Then adjust the sharding rule in rule.xml: the sharding column is id, the algorithm is mod-long, and the mod count is 2 (one partition per dataNode).
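In the stock rule.xml these three items live in the mod-long tableRule and its partition function; as a sketch (the class name below is the one shipped with Mycat 1.6):
<tableRule name="mod-long">
    <rule>
        <columns>id</columns>                 <!-- shard by the id column -->
        <algorithm>mod-long</algorithm>
    </rule>
</tableRule>
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
    <property name="count">2</property>       <!-- id % 2 -> dn1 or dn2 -->
</function>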
The meaning of each tag in these configuration files is covered in the article MyCat关键配置说明 (notes on the key Mycat configuration options).
Start Mycat:
[root@SZB-L0059021 bin]# ./mycat start
Starting Mycat-server...
[root@SZB-L0059021 bin]# ./mycat status
Mycat-server is running (27020).
[root@SZB-L0059021 bin]# mysql -uroot -p123456 -Ddb -h127.0.0.1 -P3310
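One step not shown above: the physical databases db1 and db2 must already exist on the back-end instances, and t1 has to be created through this Mycat connection so that the DDL is pushed to both dataNodes. A minimal table definition matching the desc output below:
mysql> create table t1(id int, db_name varchar(20));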
Verify sharding:
mysql> show tables;
+--------------+
| Tables in db |
+--------------+
| t1 |
+--------------+
1 row in set (0.00 sec)
mysql> desc t1;
+---------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------+-------------+------+-----+---------+-------+
| id | int(11) | YES | | NULL | |
| db_name | varchar(20) | YES | | NULL | |
+---------+-------------+------+-----+---------+-------+
2 rows in set (0.01 sec)
mysql> insert into t1(id,db_name) values(1,database());
Query OK, 1 row affected (0.01 sec)
mysql> insert into t1(id,db_name) values(2,database());
Query OK, 1 row affected (0.03 sec)
mysql> select * from t1;
+------+---------+
| id | db_name |
+------+---------+
| 2 | db1 | -- id = 2, 2 mod 2 = 0, so the row went to dn1
| 1 | db2 | -- id = 1, 1 mod 2 = 1, so the row went to dn2
+------+---------+
2 rows in set (0.01 sec)
As the result shows, the two inserted rows landed in db1 and db2 respectively, so the data is being sharded across the two databases.
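The routing decision can also be inspected without touching the data: Mycat accepts explain in front of a statement and reports which dataNode each rewritten SQL would be sent to (the exact output columns vary by Mycat version). For example:
mysql> explain insert into t1(id,db_name) values(3,database());
With id = 3, 3 mod 2 = 1, so the plan should show the statement being routed to dn2.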
Verify read/write splitting:
mysql> select * from t1;
+------+---------+
| id | db_name |
+------+---------+
| 2 | db1 | -- served by db1-M2 on shard1
| 1 | db2 | -- served by db2-M1 on shard2
+------+---------+
2 rows in set (0.01 sec)
The log confirms that the result above came from db1-M2 (port 3307) in dn1 and from db2-M1 in dn2:
2018-05-08 15:03:39.385 DEBUG [$_NIOREACTOR-0-RW] (io.mycat.server.NonBlockingSession.execute(NonBlockingSession.java:110)) - ServerConnection [id=1, schema=db, host=127.0.0.1, user=root,txIsolation=3, autocommit=true, schema=db]select * from t1, route={
1 -> dn1{SELECT *
FROM t1
LIMIT 100}
2 -> dn2{SELECT *
FROM t1
LIMIT 100}
}
...
2018-05-08 15:03:39.391 DEBUG [$_NIOREACTOR-0-RW] (io.mycat.backend.mysql.nio.handler.MultiNodeQueryHandler.rowEofResponse(MultiNodeQueryHandler.java:311)) - on row end reseponse MySQLConnection [id=29, lastTime=1525763019368, user=root, schema=db1, old shema=db1, borrowed=true, fromSlaveDB=true, threadId=511, charset=utf8, txIsolation=3, autocommit=true, attachment=dn1{SELECT *
FROM t1
LIMIT 100}, respHandler=io.mycat.backend.mysql.nio.handler.MultiNodeQueryHandler@66328ec4, host=10.20.8.126, port=3307, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
...
2018-05-08 15:03:39.392 DEBUG [$_NIOREACTOR-0-RW] (io.mycat.backend.mysql.nio.handler.MultiNodeQueryHandler.rowEofResponse(MultiNodeQueryHandler.java:311)) - on row end reseponse MySQLConnection [id=3, lastTime=1525763019387, user=root, schema=db2, old shema=db2, borrowed=true, fromSlaveDB=false, threadId=28, charset=utf8, txIsolation=3, autocommit=true, attachment=dn2{SELECT *
FROM t1
LIMIT 100}, respHandler=io.mycat.backend.mysql.nio.handler.MultiNodeQueryHandler@66328ec4, host=10.25.80.7, port=3307, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
...
Master/standby switchover:
The dnindex.properties file under mycat/conf records which writeHost Mycat is currently writing to; 0 means the first writeHost under the corresponding dataHost tag in schema.xml.
[root@SZB-L0059021 conf]# cat dnindex.properties
#update
#Tue May 08 12:59:24 CST 2018
shard2=0
shard1=0 -- normal state: Mycat uses the first writeHost under each dataHost as the write entry point
Switch to host 10.20.8.126 and manually shut down db1-M1:
10.20.8.126:3306:Master > mysqladmin -uroot -p123456 shutdown
Back on 10.20.8.57 (the Mycat host), check dnindex.properties again:
[root@SZB-L0059021 conf]# cat dnindex.properties
#update
#Tue May 08 15:12:12 CST 2018
shard2=0
shard1=1 -- after db1-M1 was shut down, Mycat switched the writeHost for shard1 over to db1-M2
On 10.20.8.57 (the host running Mycat), run the following insert:
mysql> insert into t1(id,db_name) values(4,database());
Query OK, 1 row affected (0.01 sec)
The log shows that Mycat now writes through db1-M2 (port 3307):
2018-05-08 15:13:44.987 DEBUG [$_NIOREACTOR-0-RW] (io.mycat.server.NonBlockingSession.releaseConnection(NonBlockingSession.java:341)) - release connection MySQLConnection [id=24, lastTime=1525763624968, user=root, schema=db1, old shema=db1, borrowed=true, fromSlaveDB=false, threadId=506, charset=utf8, txIsolation=3, autocommit=true, attachment=dn1{insert into t1(id,db_name) values(4,database())}, respHandler=SingleNodeHandler [node=dn1{insert into t1(id,db_name) values(4,database())}, packetId=1], host=10.20.8.126, port=3307, statusSync=null, writeQueue=0, modifiedSQLExecuted=true]
Because the dataHost is configured with writeType="0", Mycat will keep using db1-M2 as the writeHost for shard1 even after db1-M1 has been brought back up.
Verification:
Switch to host 10.20.8.126 and manually start db1-M1:
10.20.8.126:3306:Master > mysqld_safe &
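Since 10.20.8.126 runs two instances (3306 and 3307), mysqld_safe normally has to be pointed at the option file of the instance being started; a sketch assuming a hypothetical per-instance option file /etc/my3306.cnf:
10.20.8.126:3306:Master > mysqld_safe --defaults-file=/etc/my3306.cnf &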
Insert a test row through Mycat:
mysql> insert into t1(id,db_name) values(6,database());
Query OK, 1 row affected (0.02 sec)
The log shows that the data is still written through db1-M2 (port 3307):
2018-05-08 15:16:09.579 DEBUG [$_NIOREACTOR-0-RW] (io.mycat.server.NonBlockingSession.releaseConnection(NonBlockingSession.java:341)) - release connection MySQLConnection [id=32, lastTime=1525763769548, user=root, schema=db1, old shema=db1, borrowed=true, fromSlaveDB=false, threadId=514, charset=utf8, txIsolation=3, autocommit=true, attachment=dn1{insert into t1(id,db_name) values(6,database())}, respHandler=SingleNodeHandler [node=dn1{insert into t1(id,db_name) values(6,database())}, packetId=1], host=10.20.8.126, port=3307, statusSync=null, writeQueue=0, modifiedSQLExecuted=true]
If you want Mycat to go back to using db1-M1 as the writeHost for shard1, change shard1=1 to shard1=0 in dnindex.properties and restart Mycat.
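A sketch of that last step, assuming the default directory layout (dnindex.properties in conf, the mycat wrapper script in bin):
[root@SZB-L0059021 conf]# sed -i 's/^shard1=1/shard1=0/' dnindex.properties
[root@SZB-L0059021 conf]# ../bin/mycat restart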