Building an LVS + Keepalived + LAMP + GlusterFS System


Note: This article documents a small system environment I built on my own machine in my spare time, drawing on the experience shared by various experts online. Given my limited writing skills, there may be rough spots, and many parts still need to be filled in later.


I. System Architecture and Planning

1. System architecture diagram


2. System planning



3. IP address plan



II. Building the Yum Repository

1. Mount the ISO image and copy its contents to the master server

[root@test ~]# mkdir /app/yum_d
[root@test ~]# mount /dev/sr0 /media/
mount: block device /dev/sr0 is write-protected, mounting read-only
[root@test ~]# cd /media/
[root@test media]# ls
EFI      EULA_en  EULA_it  EULA_pt  HighAvailability  LoadBalancer  README         ResilientStorage            ScalableFileSystem
EULA     EULA_es  EULA_ja  EULA_zh  images            media.repo    release-notes  RPM-GPG-KEY-redhat-beta     Server
EULA_de  EULA_fr  EULA_ko  GPL      isolinux          Packages      repodata       RPM-GPG-KEY-redhat-release  TRANS.TBL
[root@test media]# cp -rf /media/ /app/yum_d/
[root@test media]# service httpd status
httpd is stopped


2. Configure the httpd service

[root@test media]# vi /etc/httpd/conf/httpd.conf    (append the following at the end of the configuration file; /kbsonlong is the URL alias and /app/yum_d is the directory holding the packages)
Alias /kbsonlong "/app/yum_d"
<Directory "/app/yum_d">
Options Indexes MultiViews FollowSymLinks
AllowOverride None
Allow from all
</Directory>
[root@test media]# service httpd start
Starting httpd: httpd: apr_sockaddr_info_get() failed for test
httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
                                                           [  OK  ]
[root@test media]# netstat -ntul
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State      
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      
tcp        0      0 :::22                       :::*                        LISTEN      
tcp        0      0 ::1:25                      :::*                        LISTEN      
tcp        0      0 :::80                       :::*                        LISTEN

3. Test that the repository is reachable

[root@test media]# curl http://192.168.52.254/kbsonlong
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="http://192.168.52.254/kbsonlong/">here</a>.</p>
<hr>
<address>Apache/2.2.15 (Red Hat) Server at 192.168.52.254 Port 80</address>
</body></html>
[root@test media]#
At this point the Yum repository is up and running.
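
To confirm that yum clients will actually be able to read the repository, the repository metadata can be checked over HTTP; for example (the path assumes the ISO contents were copied to /app/yum_d/media as above):

# curl -I http://192.168.52.254/kbsonlong/media/repodata/repomd.xml    # an HTTP 200 response means the repodata is reachable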


III. Building the DNS Server

1. Install the bind packages

[root@test update]# yum -y install bind bind-chroot bind-utils bind-libs


2. Modify the configuration files

1) Edit /etc/named.conf

[root@test ~]# cat /etc/named.conf 
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

options {
	listen-on port 53 { any; };
	listen-on-v6 port 53 { ::1; };
	directory 	"/var/named";
	dump-file 	"/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
	allow-query     { any; };
	recursion yes;

	dnssec-enable yes;
	dnssec-validation yes;
	dnssec-lookaside auto;

	/* Path to ISC DLV key */
	bindkeys-file "/etc/named.iscdlv.key";

	managed-keys-directory "/var/named/dynamic";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
	type hint;
	file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

2) Edit /etc/named.rfc1912.zones and add the forward and reverse zone definitions

[root@test ~]# cat /etc/named.rfc1912.zones 
// named.rfc1912.zones:
//
// Provided by Red Hat caching-nameserver package 
//
// ISC BIND named zone configuration for zones recommended by
// RFC 1912 section 4.1 : localhost TLDs and address zones
// and http://www.ietf.org/internet-drafts/draft-ietf-dnsop-default-local-zones-02.txt
// (c)2007 R W Franks
// 
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

zone "localhost.localdomain" IN {
        type master;
        file "named.localhost";
        allow-update { none; };
};

zone "localhost" IN {
        type master;
        file "named.localhost";
        allow-update { none; };
};

zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {
        type master;
        file "named.loopback";
        allow-update { none; };
};

zone "1.0.0.127.in-addr.arpa" IN {
        type master;
        file "named.loopback";
        allow-update { none; };
};

zone "0.in-addr.arpa" IN {
        type master;
        file "named.empty";
        allow-update { none; };
};

// forward zone for kbsonlong.com
zone "kbsonlong.com" IN {
        type master;
        file "named.kbsonlong.com";
        allow-update { none; };
};
// reverse zone for the 192.168.52.0/24 network used by the kbsonlong.com hosts
zone "52.168.192.in-addr.arpa" IN {
        type master;
        file "192.168.52.arpa";
        allow-update { none; };
};


3) Create the forward and reverse zone files (a validation and startup sketch follows the two listings)

[root@test ~]# cd /var/named/
[root@test named]# cat named.kbsonlong.com 
$TTL 1D
@       IN SOA  realhostip.com. rname.invalid. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        NS      @
        A       127.0.0.1
        AAAA    ::1
gluster01 	IN 	A  192.168.52.12
gluster02 	IN 	A  192.168.52.13
loadbalance01 	IN      A  192.168.52.10
loadbalance02 	IN      A  192.168.52.11
web01 		IN      A  192.168.52.14
web02 		IN      A  192.168.52.15
db01 		IN      A  192.168.52.16
db02		IN      A  192.168.52.17

[root@test named]# cat 192.168.52.arpa 
$TTL 1D
@       IN SOA  realhostip.com. rname.invalid. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        NS      @
        AAAA    ::1
12      PTR     gluster01.kbsonlong.com.
13      PTR     gluster02.kbsonlong.com.

10 PTR loadbalance01.kbsonlong.com.
11 PTR loadbalance02.kbsonlong.com.
14 PTR web01.kbsonlong.com.
15 PTR web02.kbsonlong.com.
16 PTR db01.kbsonlong.com.
17 PTR db02.kbsonlong.com.

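Before testing from the clients, the configuration and both zone files can be validated and named started; a minimal sketch using the standard BIND tools:

# named-checkconf /etc/named.conf
# named-checkzone kbsonlong.com /var/named/named.kbsonlong.com
# named-checkzone 52.168.192.in-addr.arpa /var/named/192.168.52.arpa
# service named restart
# chkconfig named on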

3. Install the DNS client tools and test resolution

1) Install bind-utils on the client so that the nslookup, dig and host tools are available

[root@Gluster01 ~]# yum install bind-utils -y
[root@Gluster01 ~]# cat /etc/resolv.conf
; generated by /sbin/dhclient-script
search localdomain
nameserver 8.8.8.8
nameserver 192.168.52.254
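
Because 8.8.8.8 is listed before the local nameserver here, queries for the internal zone may be answered by the public resolver first; it is safer to put 192.168.52.254 first, or to query it explicitly, for example:

# dig @192.168.52.254 web01.kbsonlong.com +short
# dig @192.168.52.254 -x 192.168.52.14 +short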

2) Forward lookup test with nslookup

[root@Gluster01 ~]# nslookup 
> gluster01.kbsonlong.com
Server:		192.168.52.254
Address:	192.168.52.254#53

Name:	gluster01.kbsonlong.com
Address: 192.168.52.12
> gluster02.kbsonlong.com
Server:		192.168.52.254
Address:	192.168.52.254#53

Name:	gluster02.kbsonlong.com
Address: 192.168.52.13
> web01.kbsonlong.com
Server:		192.168.52.254
Address:	192.168.52.254#53

Name:	web01.kbsonlong.com
Address: 192.168.52.14
>

3) Reverse lookup test

[root@Gluster01 ~]# nslookup 
> 192.168.52.15
Server:		192.168.52.254
Address:	192.168.52.254#53

15.52.168.192.in-addr.arpa	name = web02.kbsonlong.com.
> 192.168.52.16
Server:		192.168.52.254
Address:	192.168.52.254#53

16.52.168.192.in-addr.arpa	name = db01.kbsonlong.com.
> 192.168.52.17
Server:		192.168.52.254
Address:	192.168.52.254#53

17.52.168.192.in-addr.arpa	name = db02.kbsonlong.com.
> 192.168.52.10
Server:		192.168.52.254
Address:	192.168.52.254#53

10.52.168.192.in-addr.arpa	name = loadbalance01.kbsonlong.com.
> 192.168.52.11
Server:		192.168.52.254
Address:	192.168.52.254#53

11.52.168.192.in-addr.arpa	name = loadbalance02.kbsonlong.com.
>


4) Test with ping; name resolution works as expected

[root@Gluster01 ~]# ping -c 5 gluster01.kbsonlong.com
PING gluster01.kbsonlong.com (192.168.52.12) 56(84) bytes of data.
64 bytes from gluster01.kbsonlong.com (192.168.52.12): icmp_seq=1 ttl=64 time=0.018 ms
64 bytes from gluster01.kbsonlong.com (192.168.52.12): icmp_seq=2 ttl=64 time=0.015 ms
64 bytes from gluster01.kbsonlong.com (192.168.52.12): icmp_seq=3 ttl=64 time=0.029 ms
64 bytes from gluster01.kbsonlong.com (192.168.52.12): icmp_seq=4 ttl=64 time=0.028 ms
64 bytes from gluster01.kbsonlong.com (192.168.52.12): icmp_seq=5 ttl=64 time=0.037 ms

--- gluster01.kbsonlong.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 8007ms
rtt min/avg/max/mdev = 0.015/0.025/0.037/0.009 ms
[root@Gluster01 ~]#


IV. Building the GlusterFS Distributed File System

1. Configure the yum repositories

(Perform this yum configuration step on every server, so that application software can be installed anywhere.)

# vi /etc/yum.repos.d/rhel-debuginfo.repo
[rhel_6_iso]
name=local iso
baseurl=http://192.168.52.254/kbsonlong/media/
gpgcheck=1
gpgkey=http://192.168.52.254/kbsonlong/media/RPM-GPG-KEY-redhat-release

[HighAvailability]
name=HighAvailability
baseurl=http://192.168.52.254/kbsonlong/media/HighAvailability
gpgcheck=1
gpgkey=http://192.168.52.254/kbsonlong/media/RPM-GPG-KEY-redhat-release

[LoadBalancer]
name=LoadBalancer
baseurl=http://192.168.52.254/kbsonlong/media/LoadBalancer
gpgcheck=1
gpgkey=http://192.168.52.254/kbsonlong/media/RPM-GPG-KEY-redhat-release

[ResilientStorage]
name=ResilientStorage
baseurl=http://192.168.52.254/kbsonlong/media/ResilientStorage
gpgcheck=1
gpgkey=http://192.168.52.254/kbsonlong/media/RPM-GPG-KEY-redhat-release

[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://192.168.52.254/kbsonlong/media/ScalableFileSystem
gpgcheck=1
gpgkey=http://192.168.52.254/kbsonlong/media/RPM-GPG-KEY-redhat-release

[GlusterFS]
name=GlusterFS
baseurl=http://192.168.52.254/kbsonlong/gluster
gpgcheck=0

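After saving the .repo file, it is worth confirming that every repository resolves before installing anything; for example:

# yum clean all
# yum repolist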

Note: the following two steps must be executed on both Gluster01 and Gluster02.

2. Install GlusterFS

# yum install glusterfs-{fuse,server}  -y

3. Start GlusterFS

# service glusterd start


The following operations only need to be run on one of the Gluster servers.


4. Add the peer node and create the volume

# gluster peer probe 192.168.52.13    # a hostname or an IP address both work

# gluster volume create vol_web01 replica 2 \
192.168.52.12:/app/kbson/bricke01 192.168.52.13:/app/kbson/bricke01

5. Start the volume and test mounting it

# gluster volume start vol_web01

# mount -t glusterfs 192.168.52.12:/vol_web01 /mnt

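Before putting the volume into service, the peer and volume state can be checked, and the mount made persistent if desired; a short sketch (the /etc/fstab line is only an example):

# gluster peer status                # the other node should show State: Peer in Cluster (Connected)
# gluster volume info vol_web01      # Type: Replicate, Status: Started, with both bricks listed
# gluster volume status vol_web01    # the brick processes should be online
# df -h /mnt                         # capacity of the test mount from above

Optional /etc/fstab entry for a persistent mount:
192.168.52.12:/vol_web01  /mnt  glusterfs  defaults,_netdev  0 0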

V. Building the Web Service

1. Install the httpd service

# yum install httpd -y

2. Modify the configuration file and start the service

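As a minimal sketch, assuming the GlusterFS volume created in the previous section is used as the shared web root (the mount point and the test page are assumptions, not part of the original setup), the configuration step on each web node could look like this:

# mkdir -p /var/www/html
# mount -t glusterfs 192.168.52.12:/vol_web01 /var/www/html    # Web02 can mount from 192.168.52.13 instead
# echo "kbsonlong.com test page" > /var/www/html/index.html    # run once; both nodes then serve the same file

Then start the service: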
# service httpd start



VI. Building the Load Balancers

1. Install the ipvsadm and keepalived packages

# yum install ipvsadm keepalived -y


2. Modify the keepalived configuration

2.1 Master node configuration

[root@Loadbalance-01 ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     591692322@qq.com                            # alert mailbox; in practice alerting is usually handled elsewhere
   }
   notification_email_from keepalived@localhost  # sender address for alert mail
   smtp_server 127.0.0.1                         # SMTP server used to send alert mail
   smtp_connect_timeout 30                       # SMTP connection timeout
   router_id LVS_DEVEL                # identifies this node; the two nodes may use the same or different values
}

vrrp_instance VI_1 {                  # VRRP instance; different instances use different virtual router IDs
   state MASTER                       # role of this node: master
   interface eth1                     # interface that carries the VRRP traffic
   virtual_router_id 51               # virtual router ID; must be identical on master and backup
   priority 100                       # priority within the VRRP group; higher wins
   advert_int 1                       # advertisement interval in seconds
   authentication {                   # authentication; must be identical on master and backup
       auth_type PASS
       auth_pass 1111
   }
   virtual_ipaddress {                # virtual IP address (VIP)
       192.168.52.100
   }
}
virtual_server 192.168.52.100 80 {    # virtual service: VIP and port, separated by a space
   delay_loop 6                       # health-check interval for the real servers
   lb_algo lblc                       # load-balancing scheduling algorithm
   lb_kind DR                         # LVS forwarding mode
   nat_mask 255.255.255.0             # netmask of the virtual service
   persistence_timeout 50             # session persistence time, in seconds
   protocol TCP                       # forwarding protocol
   real_server 192.168.52.14 80 {     # real server IP address and port
       weight 1                       # weight of this real server
       TCP_CHECK {                    # TCP health check for this real server
           connect_timeout 10         # connection timeout of 10 s
           nb_get_retry 3             # number of retries
           delay_before_retry 3       # delay between retries
           connect_port 80            # health-check port
       }
   }
   real_server 192.168.52.15 80 {
       weight 1
       TCP_CHECK {
           connect_timeout 10
           nb_get_retry 3
           delay_before_retry 3
           connect_port 80
       }
   }
}
[root@Loadbalance-01 ~]#

2.2 Backup node configuration

[root@Loadbalance-02 ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     592692322@qq.com                            # alert mailbox; in practice alerting is usually handled elsewhere
   }
   notification_email_from keepalived@localhost  # sender address for alert mail
   smtp_server 127.0.0.1                         # SMTP server used to send alert mail
   smtp_connect_timeout 30                       # SMTP connection timeout
   router_id LVS_DEVEL                # identifies this node; the two nodes may use the same or different values
}

vrrp_instance VI_1 {                  # VRRP instance; different instances use different virtual router IDs
   state BACKUP                       # role of this node: backup
   interface eth1                     # interface that carries the VRRP traffic
   virtual_router_id 51               # virtual router ID; must be identical on master and backup
   priority 50                        # priority within the VRRP group; higher wins
   advert_int 1                       # advertisement interval in seconds
   authentication {                   # authentication; must be identical on master and backup
       auth_type PASS
       auth_pass 1111
   }
   virtual_ipaddress {                # virtual IP address (VIP)
       192.168.52.100
   }
}
virtual_server 192.168.52.100 80 {    # virtual service: VIP and port, separated by a space
   delay_loop 6                       # health-check interval for the real servers
   lb_algo lblc                       # load-balancing scheduling algorithm
   lb_kind DR                         # LVS forwarding mode
   nat_mask 255.255.255.0             # netmask of the virtual service
   persistence_timeout 50             # session persistence time, in seconds
   protocol TCP                       # forwarding protocol
   real_server 192.168.52.14 80 {     # real server IP address and port
       weight 1                       # weight of this real server
       TCP_CHECK {                    # TCP health check for this real server
           connect_timeout 10         # connection timeout of 10 s
           nb_get_retry 3             # number of retries
           delay_before_retry 3       # delay between retries
           connect_port 80            # health-check port
       }
   }
   real_server 192.168.52.15 80 {
       weight 1
       TCP_CHECK {
           connect_timeout 10
           nb_get_retry 3
           delay_before_retry 3
           connect_port 80
       }
   }
}
[root@Loadbalance-02 ~]#

2.3 Real-server configuration script (run on Web01 and Web02)

# cat lvs-client.sh
#!/bin/bash
#       591692322@qq.com
# Bind the VIP(s) to the loopback interface and adjust the ARP kernel
# parameters so that the real servers accept DR-forwarded packets for the
# VIP without answering ARP requests for it.
. /etc/rc.d/init.d/functions
VIP=(
192.168.52.100
)
function start(){
  for ((i=0;i<${#VIP[*]};i++))
      do
       echo ${i}  ${VIP[$i]}
       # bind the VIP to a loopback alias with a /32 netmask and add a host route
       ifconfig lo:${i} ${VIP[$i]} netmask 255.255.255.255 up
       route add -host ${VIP[$i]} dev lo:${i}
      done
  # do not answer ARP requests for addresses configured on lo,
  # and always use the best local address in ARP announcements
  echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
  echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
  echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
  echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
}
function stop(){
  for ((i=0;i<${#VIP[*]};i++))
      do
       echo ${i}  ${VIP[$i]}
       # remove the host route and take the loopback alias down
       route del -host ${VIP[$i]} dev lo:${i}
       ifconfig lo:${i} down
      done
  # restore the default ARP behaviour
  echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
  echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
  echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
  echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
}
case "$1" in
   start)
       start
       exit
       ;;
   stop)
       stop
       exit
       ;;
   *)
       echo "Usage: $0 {start|stop}"
       ;;
esac

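With both keepalived configurations and the real-server script in place, a minimal end-to-end check could look like the following (interface and VIP as configured above):

On Web01 and Web02:
# sh lvs-client.sh start

On Loadbalance-01 and Loadbalance-02:
# service keepalived start
# chkconfig keepalived on

On the active load balancer:
# ip addr show eth1 | grep 192.168.52.100    # the VIP should be bound on the master
# ipvsadm -Ln                                # the virtual service and both real servers should be listed

From another host on the 192.168.52.0/24 network:
# curl http://192.168.52.100/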

VII. Building the MySQL Cluster Service

1. Check the build environment

[root@master ~]# cmake --version
cmake version 3.3.2

CMake suite maintained and supported by Kitware (kitware.com/cmake).
[root@master ~]# java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
[root@master ~]#
Note: cmake and a JDK must be installed, otherwise the source build below will fail; the exact versions are not critical.

[root@DB02 ~]# rpm -qa | grep ncurses-devel    # if ncurses-devel is missing, the build complains that the curses library cannot be found
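
If the compiler toolchain or ncurses-devel is missing, the build dependencies can be installed from the local yum repository configured earlier (cmake and the JDK shown above were installed separately); for example:

# yum install -y gcc gcc-c++ make ncurses-devel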

2. Download the source package

# wget http://cdn.mysql.com//Downloads/MySQL-Cluster-7.4/mysql-cluster-gpl-7.4.8.tar.gz

3. Install the DB01 node

1) Create the MySQL installation directory

# mkdir /opt/mysql

2) Build and install MySQL from source

# tar zxvf mysql-cluster-gpl-7.4.8.tar.gz
# cd mysql-cluster-gpl-7.4.8
# cmake -DCMAKE_INSTALL_PREFIX=/opt/mysql \
 -DSYSCONFDIR=/opt/mysql/etc \
 -DMYSQL_DATADIR=/opt/mysql/data \
 -DMYSQL_TCP_PORT=3306 \
 -DMYSQL_UNIX_ADDR=/tmp/mysqld.sock \
 -DWITH_EXTRA_CHARSETS=all \
 -DWITH_SSL=bundled  \
 -DWITH_EMBEDDED_SERVER=1  \
 -DENABLED_LOCAL_INFILE=1  \
 -DWITH_INNOBASE_STORAGE_ENGINE=1  \
 -DDEFAULT_CHARSET=utf8  \
 -DDEFAULT_COLLATION=utf8_general_ci  \
 -DWITH_NDB_JAVA=1  \
 -DWITH_NDBCLUSTER_STORAGE_ENGINE=1
# make && make install

3) Create the mysql user and set file ownership

# groupadd mysql
# useradd -m -r -g mysql mysql
# cd /opt/mysql
# chown -R mysql.mysql /opt/mysql
# chgrp -R mysql /opt/mysql

4) Initialize the database

# scripts/mysql_install_db --user=mysql --basedir=/opt/mysql/ --datadir=/opt/mysql/data


5) Configure the database

# cp support-files/mysql.server /etc/init.d/mysql
# cp support-files/my-default.cnf  /etc/my.cnf

Set the environment variables:
# vi /etc/profile    # append the following at the end of the file
PATH=/opt/mysql/bin:/opt/mysql/lib:$PATH
export PATH
Then run source /etc/profile (or log in again) for the change to take effect.

Start the database and set the root password:
# service mysql start
# mysqladmin -u root password 'mysql'


6) Modify the database configuration file

[root@DB01 ~]# more /etc/my.cnf |grep -v ^#
[mysqld]
lower_case_table_names=1
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES 
socket=/tmp/mysqld.sock
ndbcluster  #run NDB storage engine
[mysql_cluster]                      # tells this SQL node how to reach the management node
ndb-connectstring=192.168.52.16


7) Create the management node configuration file

[root@DB01 ~]# more /opt/mysql/etc/config.cnf 
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=1    # Number of replicas
DataMemory=80M    # How much memory to allocate for data storage
IndexMemory=18M   # How much memory to allocate for index storage
                  # For DataMemory and IndexMemory, we have used the
                  # default values. Since the "world" database takes up
                  # only about 500KB, this should be more than enough for
                  # this example Cluster setup.
[tcp default]
# TCP/IP options:
portnumber=2202   # This is the default; however, you can use any
                  # port that is free for all the hosts in the cluster
                  # Note: It is recommended that you do not specify the port
                  # number at all and simply allow the default value to be used
                  # instead
[ndb_mgmd]
# Management node options:
Nodeid=1
hostname=192.168.52.16            # Hostname or IP address of MGM node
datadir=/opt/mysql/log            # Directory for MGM node log files
[ndbd]
# Options for data node "A" (one [ndbd] section per data node):
hostname=192.168.52.16            # Hostname or IP address
datadir=/data/mysql_ndb           # Directory for this data node's data files
[ndbd]
# Options for data node "B":
hostname=192.168.52.17            # Hostname or IP address
datadir=/data/mysql_ndb           # Directory for this data node's data files
[mysqld]
# SQL node options:
hostname=192.168.52.16            # Hostname or IP address
                                  # (additional mysqld connections can be
                                  # specified for this node for various
                                  # purposes such as running ndb_restore)
[mysqld]
# Second SQL node:
hostname=192.168.52.17

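The datadir paths referenced in config.cnf have to exist before the management and data daemons are started; a short sketch using the paths above:

On DB01:
# mkdir -p /opt/mysql/log /data/mysql_ndb

On DB02:
# mkdir -p /data/mysql_ndb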
4. Install the DB02 node

1) Package the MySQL installation on DB01

Stop the mysql service first:
# service mysql stop
Shutting down MySQL..... SUCCESS!
# cd /opt
# tar -czvf mysql.tar mysql


2) Create the mysql user and copy the MySQL package from DB01

# groupadd mysql
# useradd -m -r -g mysql mysql
# scp 192.168.52.16:/opt/mysql.tar /opt/
The authenticity of host '192.168.52.16 (192.168.52.10)' can't be established.
RSA key fingerprint is 68:9c:59:fd:c6:71:8a:ce:f3:06:e5:13:48:5a:55:fb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.52.16' (RSA) to the list of known hosts.
root@192.168.52.16's password:

3) Extract the mysql.tar package

# tar -xzvf /opt/mysql.tar -C /opt/


4) Configure the database

# cp support-files/mysql.server /etc/init.d/mysql
Set the environment variables by appending the following to the end of /etc/profile:
PATH=/opt/mysql/bin:/opt/mysql/lib:$PATH
export PATH

5) Modify the database configuration file

[root@DB02 ~]# more /etc/my.cnf 
[mysqld]
ndbcluster  #run NDB storage engine
[mysql_cluster]
ndb-connectstring=192.168.52.16
[root@DB02 ~]#


5. Start the MySQL Cluster

1) Start the management node (on DB01)

[root@DB01 etc]# /opt/mysql/bin/ndb_mgmd -f /opt/mysql/etc/config.cnf 
MySQL Cluster Management Server mysql-5.6.27 ndb-7.4.8


2) Start the data nodes

[root@DB01 ~]# ndbd
2015-12-08 03:54:45 [ndbd] INFO     -- Angel connected to '192.168.52.16:1186'
2015-12-08 03:54:45 [ndbd] INFO     -- Angel allocated nodeid: 2
[root@DB02 ~]# ndbd
2015-12-08 03:55:23 [ndbd] INFO     -- Angel connected to '192.168.52.16:1186'
2015-12-08 03:55:23 [ndbd] INFO     -- Angel allocated nodeid: 3


3) Start the SQL nodes

# service mysql start

4) Manage the cluster with the ndb_mgm tool

[root@DB01 ~]# /opt/mysql/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: 192.168.52.16:1186
Cluster Configuration
---------------------
[ndbd(NDB)]	2 node(s)
id=2	@192.168.52.16  (mysql-5.6.27 ndb-7.4.8, Nodegroup: 0, *)
id=3	@192.168.52.17  (mysql-5.6.27 ndb-7.4.8, Nodegroup: 1)

[ndb_mgmd(MGM)]	1 node(s)
id=1	@192.168.52.16  (mysql-5.6.27 ndb-7.4.8)

[mysqld(API)]	2 node(s)
id=4	@192.168.52.16  (mysql-5.6.27 ndb-7.4.8)
id=5	@192.168.52.17  (mysql-5.6.27 ndb-7.4.8)

ndb_mgm>

The output above shows that the 2 NDB data nodes, the 1 MGM management node, and the 2 SQL nodes are all connected.

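To confirm that data written through one SQL node is visible on the other, an NDB table can be created on DB01 and queried on DB02 (the clustertest database and t1 table are only example names, and the same root password is assumed on both nodes):

On DB01:
# mysql -u root -pmysql -e "CREATE DATABASE IF NOT EXISTS clustertest;"
# mysql -u root -pmysql -e "CREATE TABLE clustertest.t1 (id INT PRIMARY KEY, note VARCHAR(32)) ENGINE=NDBCLUSTER;"
# mysql -u root -pmysql -e "INSERT INTO clustertest.t1 VALUES (1, 'written on db01');"

On DB02:
# mysql -u root -pmysql -e "SELECT * FROM clustertest.t1;"    # the row inserted on DB01 should be returned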

5) Stop the mysql service on DB01 and check the connection status

[root@DB01 ~]# service mysql stop
Shutting down MySQL.... SUCCESS!
[root@DB01 ~]# /opt/mysql/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]	2 node(s)
id=2	@192.168.52.16  (mysql-5.6.27 ndb-7.4.8, Nodegroup: 0, *)
id=3	@192.168.52.17  (mysql-5.6.27 ndb-7.4.8, Nodegroup: 1)

[ndb_mgmd(MGM)]	1 node(s)
id=1	@192.168.52.16  (mysql-5.6.27 ndb-7.4.8)

[mysqld(API)]	2 node(s)
id=4 (not connected, accepting connect from 192.168.52.16)
id=5	@192.168.52.17  (mysql-5.6.27 ndb-7.4.8)
The output above shows that the SQL node with id=4 is no longer connected.






