Vertical Splitting
Vertical splitting divides tables by module into different databases (without, of course, violating third normal form). This kind of split is common in the evolution of large websites. When a site is still small, with only a few people developing and maintaining it, all modules and tables live together; as the site grows, it is carved into multiple subsystems, and the need arises to divide the tables by module and function. A step beyond vertical splitting is service-oriented refactoring: in short, breaking a tightly coupled system into multiple loosely coupled services that satisfy business needs through service calls. Once tables are split out, they should therefore be exposed as services rather than having other modules query them directly. In Taobao's architectural evolution, the most important step was exactly this service-oriented refactoring: extracting core concepts such as users, transactions, shops, and items into independent services, which also makes local optimization and governance much easier and protects the stability of the core modules.
Limitation of vertical splitting: a single table with a huge row count still hits performance bottlenecks.
Horizontal Splitting
Vertical splitting, as discussed above, only distributes tables by module across databases; it does not solve the problem of a single table holding too much data. Horizontal splitting partitions the rows of a table across different tables or databases according to some rule. A billing system, for example, is well suited to splitting by time, since it usually processes data from one time period. A SaaS application is better split by user, since users are isolated from one another and a request rarely touches several users' data; a simple split by user_id range is enough.
In plain terms: horizontal splitting divides rows, spreading row data across different tables; vertical splitting divides columns (by module), spreading tables across different databases.
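The routing decision for horizontal splitting is usually just a pure function of the shard key. A minimal sketch of the two placement schemes mentioned above (the table name t_order is our own illustration, not from any tool discussed below):

```shell
#!/bin/sh
# Hash-style placement: user_id modulo the shard count.
shard_table() {
  user_id="$1"; shard_count="$2"
  echo "t_order_$((user_id % shard_count))"
}

# Range-style placement, as suggested for splitting SaaS data by user_id range.
shard_by_range() {
  user_id="$1"; range_size="$2"   # e.g. 1000000 users per shard
  echo "t_order_$((user_id / range_size))"
}
```

For example, `shard_table 10007 4` prints t_order_3, and `shard_by_range 2500000 1000000` prints t_order_2.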
I. Amoeba for MySQL
1. Introduction. The open-source Amoeba project first released Amoeba for MySQL in 2008. The software is a distributed database front-end proxy layer for MySQL: it sits transparently between clients and the DB server(s) and provides SQL routing at the point where the application accesses MySQL, focusing on database-proxy development. It offers load balancing, high availability, SQL filtering, read/write splitting, routing to target databases, and concurrent fan-out of a request to multiple databases with result merging. With Amoeba you can implement high availability over multiple data sources, load balancing, and data sharding; it is already used in production at many companies.
Amoeba has seven configuration files:
Main configuration file ($AMOEBA_HOME/conf/amoeba.xml): basic parameters of the Amoeba service, such as host address, port, authentication, the user name and password used to connect, thread counts, timeouts, and the locations of the other configuration files.
Database server configuration file ($AMOEBA_HOME/conf/dbServers.xml): information about the database servers Amoeba proxies, such as host IP, port, user name, and password.
Sharding rule configuration file ($AMOEBA_HOME/conf/rule.xml): the sharding rules.
Database function configuration file ($AMOEBA_HOME/conf/functionMap.xml): how database functions are handled; Amoeba parses database functions using the methods configured here.
Rule function configuration file ($AMOEBA_HOME/conf/ruleFunctionMap.xml): how user-defined functions used in the sharding rules are handled.
Access list configuration file ($AMOEBA_HOME/conf/access_list.conf): which client IPs are allowed or denied access to Amoeba.
Logging configuration file ($AMOEBA_HOME/conf/log4j.xml): log levels and output format for Amoeba.
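A quick sanity check that all seven files are in place can be scripted (a sketch; assumes $AMOEBA_HOME is already exported as in the installation steps below):

```shell
#!/bin/sh
# Report any of the seven expected Amoeba configuration files that are
# missing from the given conf directory; returns non-zero if any are absent.
check_amoeba_conf() {
  conf_dir="$1"; missing=0
  for f in amoeba.xml dbServers.xml rule.xml functionMap.xml \
           ruleFunctionMap.xml access_list.conf log4j.xml; do
    if [ ! -f "$conf_dir/$f" ]; then
      echo "missing: $f"
      missing=1
    fi
  done
  return $missing
}
# Typical call: check_amoeba_conf "$AMOEBA_HOME/conf"
```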
2. Environment
1. Amoeba server: 192.168.1.248
2. MySQL server (master): 192.168.1.228
3. MySQL server (slave): 192.168.1.238
3. Installation and troubleshooting
Installation steps:
1. Download the Amoeba package
#mkdir /usr/local/amoeba/
#cd /usr/local/amoeba/
#wget http://sourceforge.net/projects/amoeba/files
2. Install Amoeba's dependencies
Install Java support (Amoeba requires a Java runtime)
3. Install Amoeba
#cd /usr/local/amoeba/
#tar -zxvf amoeba-mysql-binary-2.2.0.tar.gz
#vi /etc/profile
export AMOEBA_HOME=/usr/local/amoeba
#. /etc/profile
#cd bin/
#chmod 755 *
#./amoeba
The stack size specified is too small, Specify at least 160k
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit
# vi amoeba
Edit the Xss parameter on line 58:
DEFAULT_OPTS="-server -Xms256m -Xmx256m -Xss128k"
Change it to:
DEFAULT_OPTS="-server -Xms256m -Xmx256m -Xss256k"
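If you prefer to script this fix (a sketch, assuming the launcher still contains the stock DEFAULT_OPTS line quoted above):

```shell
#!/bin/sh
# Raise the JVM thread stack size in the amoeba launcher from 128k to 256k,
# which resolves the "stack size specified is too small" startup error.
fix_stack_size() {
  launcher="$1"
  sed -i 's/-Xss128k/-Xss256k/' "$launcher"
}
# Typical call: fix_stack_size "$AMOEBA_HOME/bin/amoeba"
```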
#./amoeba
# If the following usage line is printed, Amoeba's dependencies are in place:
amoeba start|stop
4. Edit the configuration files (here we mainly change amoeba.xml and dbServers.xml)
amoeba.xml:
<?xml version="1.0" encoding="gbk"?>
<!DOCTYPE amoeba:configuration SYSTEM "amoeba.dtd">
<amoeba:configuration xmlns:amoeba="http://amoeba.meidusa.com/">
<proxy>
<!-- service class must implements com.meidusa.amoeba.service.Service -->
<service name="Amoeba for Mysql" class="com.meidusa.amoeba.net.ServerableConnectionManager">
<!-- port -->
<property name="port">8066</property>
<!-- bind ipAddress -->
<!--
<property name="ipAddress">127.0.0.1</property>
-->
<property name="manager">${clientConnectioneManager}</property>
<property name="connectionFactory">
<bean class="com.meidusa.amoeba.mysql.net.MysqlClientConnectionFactory">
<property name="sendBufferSize">128</property>
<property name="receiveBufferSize">64</property>
</bean>
</property>
<property name="authenticator">
<bean class="com.meidusa.amoeba.mysql.server.MysqlClientAuthenticator">
<property name="user">root</property>
<property name="password">123456</property>
<property name="filter">
<bean class="com.meidusa.amoeba.server.IPAccessController">
<property name="ipFile">${amoeba.home}/conf/access_list.conf</property>
</bean>
</property>
</bean>
</property>
</service>
<!-- server class must implements com.meidusa.amoeba.service.Service -->
<service name="Amoeba Monitor Server" class="com.meidusa.amoeba.monitor.MonitorServer">
<!-- port -->
<!-- default value: random number
<property name="port">9066</property>
-->
<!-- bind ipAddress
<property name="ipAddress">127.0.0.1</property>-->
<property name="daemon">true</property>
<property name="manager">${clientConnectioneManager}</property>
<property name="connectionFactory">
<bean class="com.meidusa.amoeba.monitor.net.MonitorClientConnectionFactory"></bean>
</property>
</service>
<runtime class="com.meidusa.amoeba.mysql.context.MysqlRuntimeContext">
<!-- proxy server net IO Read thread size -->
<property name="readThreadPoolSize">20</property>
<!-- proxy server client process thread size -->
<property name="clientSideThreadPoolSize">30</property>
<!-- mysql server data packet process thread size -->
<property name="serverSideThreadPoolSize">30</property>
<!-- per connection cache prepared statement size -->
<property name="statementCacheSize">500</property>
<!-- query timeout( default: 60 second , TimeUnit:second) -->
<property name="queryTimeout">60</property>
</runtime>
</proxy>
<!--
Each ConnectionManager will start as thread
manager responsible for the Connection IO read , Death Detection
-->
<connectionManagerList>
<connectionManager name="clientConnectioneManager" class="com.meidusa.amoeba.net.MultiConnectionManagerWrapper">
<property name="subManagerClassName">com.meidusa.amoeba.net.ConnectionManager</property>
<!--
default value is avaliable Processors
<property name="processors">5</property>
-->
</connectionManager>
<connectionManager name="defaultManager" class="com.meidusa.amoeba.net.MultiConnectionManagerWrapper">
<property name="subManagerClassName">com.meidusa.amoeba.net.AuthingableConnectionManager</property>
<!--
default value is avaliable Processors
<property name="processors">5</property>
-->
</connectionManager>
</connectionManagerList>
<!-- default using file loader -->
<dbServerLoader class="com.meidusa.amoeba.context.DBServerConfigFileLoader">
<property name="configFile">${amoeba.home}/conf/dbServers.xml</property>
</dbServerLoader>
<queryRouter class="com.meidusa.amoeba.mysql.parser.MysqlQueryRouter">
<property name="ruleLoader">
<bean class="com.meidusa.amoeba.route.TableRuleFileLoader">
<property name="ruleFile">${amoeba.home}/conf/rule.xml</property>
<property name="functionFile">${amoeba.home}/conf/ruleFunctionMap.xml</property>
</bean>
</property>
<property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>
<property name="LRUMapSize">1500</property>
<property name="defaultPool">master</property>
<property name="writePool">master</property>
<property name="readPool">slavePool</property>
<property name="needParse">true</property>
</queryRouter>
</amoeba:configuration>
dbServers.xml:
<?xml version="1.0" encoding="gbk"?>
<!DOCTYPE amoeba:dbServers SYSTEM "dbserver.dtd">
<amoeba:dbServers xmlns:amoeba="http://amoeba.meidusa.com/">
<!--
Each dbServer needs to be configured into a Pool,
If you need to configure multiple dbServer with load balancing that can be simplified by the following configuration:
add attribute with name virtual = "true" in dbServer, but the configuration does not allow the element with name factoryConfig
such as 'multiPool' dbServer
-->
<dbServer name="abstractServer" abstractive="true">
<factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">
<property name="manager">${defaultManager}</property>
<property name="sendBufferSize">64</property>
<property name="receiveBufferSize">128</property>
<!-- mysql port -->
<property name="port">3306</property>
<!-- mysql schema -->
<property name="schema">sync</property>
<!-- mysql user -->
<property name="user">root</property>
<!-- mysql password-->
<property name="password">123456</property>
</factoryConfig>
<poolConfig class="com.meidusa.amoeba.net.poolable.PoolableObjectPool">
<property name="maxActive">500</property>
<property name="maxIdle">500</property>
<property name="minIdle">10</property>
<property name="minEvictableIdleTimeMillis">600000</property>
<property name="timeBetweenEvictionRunsMillis">600000</property>
<property name="testOnBorrow">true</property>
<property name="testOnReturn">true</property>
<property name="testWhileIdle">true</property>
</poolConfig>
</dbServer>
<dbServer name="master" parent="abstractServer">
<factoryConfig>
<!-- mysql ip -->
<property name="ipAddress">192.168.1.228</property>
</factoryConfig>
</dbServer>
<dbServer name="slave" parent="abstractServer">
<factoryConfig>
<!-- mysql ip -->
<property name="ipAddress">192.168.1.238</property>
</factoryConfig>
</dbServer>
<dbServer name="slavePool" virtual="true">
<poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">
<!-- Load balancing strategy: 1=ROUNDROBIN , 2=WEIGHTBASED , 3=HA-->
<property name="loadbalance">1</property>
<!-- Separated by commas,such as: server1,server2,server1 -->
<property name="poolNames">master,slave,slave,slave</property>
</poolConfig>
</dbServer>
</amoeba:dbServers>
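Note the trick in poolNames above: listing slave three times against master once makes plain round-robin (loadbalance=1) send roughly three quarters of the read traffic to the slave. A simulation of that behaviour (our own sketch, not Amoeba code):

```shell
#!/bin/sh
# Simulate ROUNDROBIN over poolNames=master,slave,slave,slave: the i-th
# read request (0-based) goes to entry i mod 4 of the pool list.
pick_backend() {
  i="$1"
  set -- master slave slave slave
  shift $((i % 4))
  echo "$1"
}
```

Requests 0, 4, 8, ... hit the master; all others hit the slave, giving the 1:3 read ratio.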
5. Start Amoeba
#./amoeba start
II. Configuring read/write splitting with mysql-proxy
[root@LVS /]# yum -y install gcc* gcc-c++* autoconf* automake* zlib* libxml* ncurses-devel* libmcrypt* libtool* flex* pkgconfig* glib-*
[root@LVS /]# mkdir /usr/local/lua
[root@LVS /]# cd /usr/local/lua/
[root@LVS lua]# wget http://www.lua.org/ftp/lua-5.3.1.tar.gz
[root@LVS lua]# tar -zxvf lua-5.3.1.tar.gz
[root@LVS lua]# cd lua-5.3.1/
[root@LVS lua-5.3.1]# make linux
lua.c:80:31: error: readline/readline.h: No such file or directory
lua.c:81:30: error: readline/history.h: No such file or directory
[root@LVS lua-5.3.1]# yum install readline*
Another app is currently holding the yum lock; waiting for it to exit...
[root@LVS lua-5.3.1]#rm -rf /var/run/yum.pid
[root@LVS lua-5.3.1]# yum install readline*
[root@LVS lua-5.3.1]# make linux
[root@LVS lua-5.3.1]# make install
[root@LVS lua-5.3.1]# vi /etc/profile
export LUA_CFLAGS="-I/usr/local/include" LUA_LIBS="-L/usr/local/lib -llua -ldl" LDFLAGS="-lm"
[root@LVS lua-5.3.1]# . /etc/profile
[root@LVS lua-5.3.1]# wget ftp://mirror.switch.ch/mirror/mysql/Downloads/MySQL-Proxy/mysql-proxy-0.8.4.tar.gz
After much effort the source build kept failing,
so in the end we fell back to the prebuilt binary tarball from:
http://ftp.ntu.edu.tw/pub/MySQL/Downloads/MySQL-Proxy/
[root@LVS mysql-proxy]# tar -zxvf mysql-proxy-0.8.4-linux-el6-x86-64bit.tar.gz
[root@LVS mysql-proxy]# cd /usr/local/mysql-proxy/
[root@LVS mysql-proxy]# cd bin
[root@LVS bin]# ./mysql-proxy --daemon --keepalive --proxy-read-only-backend-addresses=192.168.1.238:3306 --proxy-backend-addresses=192.168.1.228:3306 --proxy-lua-script=/usr/local/mysql-proxy/share/doc/mysql-proxy/rw-splitting.lua >> /var/log/mysql-proxy.log &
Option reference:
--help-all: print the full help text
--proxy-address=host:port: address and port the proxy listens on (default 4040)
--admin-address=host:port: address and port of the admin interface (default 4041)
--proxy-backend-addresses=host:port: address and port of a backend MySQL server (the master); short form -b
--proxy-read-only-backend-addresses=host:port: address and port of a read-only backend MySQL server (a slave); may be given for several servers, balanced round-robin; short form -r
--proxy-lua-script=file: Lua script implementing the proxy logic
--daemon: run mysql-proxy as a daemon
--defaults-file=/path/to/conf_file_name: path of the configuration file to use
--log-file=/path/to/log_file_name: log file name
--log-level=level: log level
--log-use-syslog: log via syslog
--user=user_name: user to run the mysql-proxy process as
--admin-username=user: user name for logging in to the mysql-proxy admin interface
--admin-password=pass: password for the mysql-proxy admin interface
--admin-lua-script=script-file: Lua script for the admin module (creates the admin interface)
--plugins=admin: load the admin plugin
Usage
With two MySQL servers:
mysql-proxy \
--proxy-backend-addresses=mysql_ip1:3306 \
--proxy-backend-addresses=mysql_ip2:3306
With three servers (one master for writes, two slaves for reads):
mysql-proxy \
--proxy-backend-addresses=mysql_ip1:3306 \
--proxy-read-only-backend-addresses=mysql_ip2:3306 \
--proxy-read-only-backend-addresses=mysql_ip3:3306
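The invocation patterns above can be generated for any master/slave list (a sketch that only builds the argument string; it does not start mysql-proxy):

```shell
#!/bin/sh
# Compose mysql-proxy arguments: the first host is the writable master,
# the rest become read-only backends.
proxy_args() {
  master="$1"; shift
  args="--proxy-backend-addresses=${master}:3306"
  for slave in "$@"; do
    args="$args --proxy-read-only-backend-addresses=${slave}:3306"
  done
  echo "$args"
}
# e.g. proxy_args 192.168.1.228 192.168.1.238 reproduces the master/slave
# pair used throughout this article.
```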
The package does not ship an auto-start script; if you need one, an init script along the following lines will do:
[root@Proxy mysql-proxy]# vi /etc/init.d/mysql-proxy
#!/bin/sh
#
# mysql-proxy This script starts and stops the mysql-proxy daemon
#
# chkconfig: - 78 30
# processname: mysql-proxy
# description: mysql-proxy is a proxy daemon to mysql
# Source function library.
. /etc/rc.d/init.d/functions
#PROXY_PATH=/usr/local/bin
PROXY_PATH=/usr/local/mysql-proxy/bin
prog="mysql-proxy"
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ "${NETWORKING}" = "no" ] && exit 0
# Set default mysql-proxy configuration.
#PROXY_OPTIONS="--daemon"
PROXY_OPTIONS="--proxy-read-only-backend-addresses=192.168.1.238:3306 --proxy-backend-addresses=192.168.1.228:3306 --proxy-lua-script=/usr/local/mysql-proxy/share/doc/mysql-proxy/rw-splitting.lua"
PROXY_PID=/usr/local/mysql-proxy/run/mysql-proxy.pid
# Source mysql-proxy configuration.
if [ -f /etc/sysconfig/mysql-proxy ]; then
. /etc/sysconfig/mysql-proxy
fi
PATH=$PATH:/usr/bin:/usr/local/bin:$PROXY_PATH
# By default it's all good
RETVAL=0
# See how we were called.
case "$1" in
start)
# Start daemon.
echo -n $"Starting $prog: "
$NICELEVEL $PROXY_PATH/mysql-proxy $PROXY_OPTIONS --daemon --pid-file=$PROXY_PID --user=root --log-level=debug --log-file=/usr/local/mysql-proxy/log/mysql-proxy.log
RETVAL=$?
echo
if [ $RETVAL = 0 ]; then
touch /var/lock/subsys/mysql-proxy
echo "ok"
fi
;;
stop)
# Stop daemons.
echo -n $"Stopping $prog: "
killproc $prog
RETVAL=$?
echo
if [ $RETVAL = 0 ]; then
rm -f /var/lock/subsys/mysql-proxy
rm -f $PROXY_PID
fi
;;
restart)
$0 stop
sleep 3
$0 start
;;
condrestart)
[ -e /var/lock/subsys/mysql-proxy ] && $0 restart
;;
status)
status mysql-proxy
RETVAL=$?
;;
*)
echo "Usage: $0 {start|stop|restart|status|condrestart}"
RETVAL=1
;;
esac
exit $RETVAL
III. Mycat installation and testing
Mycat source: http://code.taobao.org/svn/openclouddb/
[root@LVS local]# mkdir mycat
[root@LVS local]# mv Mycat-server-1.2.2-20140819-532-linux.tar.gz mycat/
[root@LVS local]# cd mycat/
[root@LVS mycat]# ls
Mycat-server-1.2.2-20140819-532-linux.tar.gz
[root@LVS mycat]# tar -zxvf Mycat-server-1.2.2-20140819-532-linux.tar.gz
Edit wrapper.conf:
# Java Application
wrapper.java.command=/usr/java/jdk1.7.0_72/bin/java
server.xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- - - Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License. - You
may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0
- - Unless required by applicable law or agreed to in writing, software -
distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the
License for the specific language governing permissions and - limitations
under the License. -->
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://org.opencloudb/">
<system>
<property name="sequnceHandlerType">0</property>
<!--
<property name="mutiNodeLimitType">1</property> 0: small-scale sort (default); 1: billion-row sort
<property name="mutiNodePatchSize">100</property> batch size for billion-row sorting
<property name="processors">32</property> <property name="processorExecutor">32</property>
<property name="serverPort">8066</property> <property name="managerPort">9066</property>
<property name="idleTimeout">300000</property> <property name="bindIp">0.0.0.0</property>
<property name="frontWriteQueueSize">4096</property> <property name="processors">32</property> -->
</system>
<user name="root">
<property name="password">123456</property>
<property name="schemas">sync</property>
</user>
<!-- <user name="user">
<property name="password">user</property>
<property name="schemas">TESTDB</property>
<property name="readOnly">true</property>
</user>
<cluster> <node name="cobar1"> <property name="host">127.0.0.1</property>
<property name="weight">1</property> </node> </cluster> -->
<!-- <quarantine> <host name="1.2.3.4"> <property name="user">test</property>
</host> </quarantine> -->
</mycat:server>
schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://org.opencloudb/">
<schema name="sync" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1">
</schema>
<dataNode name="dn1" dataHost="localhost1" database="sync" />
<!--
<dataNode name="dn2" dataHost="localhost1" database="db2" />
<dataNode name="dn3" dataHost="localhost1" database="db3" />
<dataNode name="jdbc_dn1" dataHost="jdbchost" database="db1" />
<dataNode name="jdbc_dn2" dataHost="jdbchost" database="db2" />
<dataNode name="jdbc_dn3" dataHost="jdbchost" database="db3" />
-->
<dataHost name="localhost1" maxCon="1000" minCon="100" balance="1" writeType="0"
dbType="mysql" dbDriver="native">
<heartbeat>select user()</heartbeat>
<!-- can have multi write hosts -->
<writeHost host="192.168.1.228" url="192.168.1.228:3306" user="root"
password="123456">
<!-- can have multi read hosts -->
<readHost host="192.168.1.238" url="192.168.1.238:3306" user="root" password="123456" />
</writeHost>
<!-- <writeHost host="hostM2" url="localhost:3316" user="root" password="123456"/> -->
</dataHost>
<!--
<dataHost name="jdbchost" maxCon="1000" minCon="10" balance="0"
dbType="mysql" dbDriver="jdbc">
<heartbeat>select user()</heartbeat>
<writeHost host="hostM1" url="jdbc:mysql://localhost:3306"
user="root" password="123456">
</writeHost>
</dataHost>
-->
</mycat:schema>
[root@LVS mycat]# cd bin
[root@LVS bin]# ./startup_nowrap.sh
"/usr/java/jdk1.7.0_72/bin/java" -DMYCAT_HOME="/usr/local/mycat" -classpath "/usr/local/mycat/conf:/usr/local/mycat/lib/classes:/usr/local/mycat/lib/ehcache-core-2.6.6.jar:/usr/local/mycat/lib/fdb-sql-parser-1.3.0.jar:/usr/local/mycat/lib/log4j-1.2.17.jar:/usr/local/mycat/lib/mapdb-0.9.8.jar:/usr/local/mycat/lib/Mycat-server-1.2.2.jar:/usr/local/mycat/lib/slf4j-api-1.7.3.jar:/usr/local/mycat/lib/slf4j-log4j12-1.7.3.jar:/usr/local/mycat/lib/wrapper.jar" -server -Xms2G -Xmx2G -XX:MaxPermSize=64M -XX:+AggressiveOpts -XX:MaxDirectMemorySize=2G org.opencloudb.MycatStartup >> "/usr/local/mycat/logs/console.log" 2>&1 &
IV. Atlas installation and testing
Reference: http://sofar.blog.51cto.com/353572/1601552/
Atlas comes in two builds. The sharding build has extra dependencies, and since table sharding is usually not needed here, we install the plain build:
Atlas (plain): https://github.com/Qihoo360/Atlas/releases/download/2.2.1/Atlas-2.2.1.el6.x86_64.rpm
Atlas (sharding): https://github.com/Qihoo360/Atlas/releases/download/sharding-1.0.1/Atlas-sharding_1.0.1-el6.x86_64.rpm
rpm -vih https://github.com/Qihoo360/Atlas/releases/download/2.2.1/Atlas-2.2.1.el6.x86_64.rpm
#Atlas install directory
cd /usr/local/mysql-proxy/
./bin/encrypt 123456
#most options can keep their default values
vi conf/test.cnf
[mysql-proxy]
admin-username = sysadmin
admin-password = 123456
proxy-backend-addresses = 10.0.80.199:3306
#IPs and ports of the slaves. A number after @ is the weight used for load balancing (default 1 if omitted); multiple entries are separated by commas. If you want the master to share read traffic as well, add it to this list too.
proxy-read-only-backend-addresses = 10.0.80.200:3306,10.0.80.201:3306
pwds = root:/iZxz+0GRoA=,root:/iZxz+0GRoA=
daemon = true
keepalive = true
event-threads = 16
log-level = message
log-path = /usr/local/mysql-proxy/log
sql-log = ON
#clients connect to Atlas on port 3306 to send SQL requests
proxy-address = 0.0.0.0:3306
#DBAs connect to Atlas on port 3307 for admin and maintenance operations
admin-address = 10.0.80.202:3307
#Sharding setup. Here db is the database name, tb the table name, id the sharding column, and 3 the number of child tables. Multiple rules can be listed, separated by commas; omit the option entirely if you do not shard. The child tables must be created in advance and are named <table>_<n> with n in [0, child count - 1]; in this example that is tb_0, tb_1, tb_2.
#tables = db.tb.id.3
charset = utf8
#end
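The commented tables = db.tb.id.3 rule above distributes rows over child tables tb_0 .. tb_2 by the id column. Assuming the usual modulo placement (a sketch of the naming convention, not Atlas source; check the Atlas documentation for the exact hash):

```shell
#!/bin/sh
# Map a shard-column value to the Atlas child-table name <table>_<id mod n>.
child_table() {
  table="$1"; id="$2"; subtables="$3"
  echo "${table}_$((id % subtables))"
}
# e.g. child_table tb 7 3 prints tb_1
```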
Start Atlas:
./bin/mysql-proxyd test start
Notes:
(1) The executable is mysql-proxyd (not mysql-proxy).
(2) test is the name of the configuration file under conf and must also be the instance name inside that file; the three must match.
(3) Use ps -ef | grep mysql-proxy to check whether Atlas is running or stopped.
tail -f /usr/local/mysql-proxy/log/test.log
/usr/local/mysql/bin/mysql -h 10.0.80.202 -P 3306 -u root -p123456
/usr/local/mysql/bin/mysql -h 10.0.80.202 -P 3307 -u sysadmin -p123456