Kafka

Kafka overview

Kafka was originally developed at LinkedIn. It is a distributed, partitioned, replicated message system that relies on ZooKeeper for coordination. Its defining feature is the ability to process large volumes of data in real time, which suits a wide range of scenarios: Hadoop-based batch processing, low-latency real-time systems, Storm/Spark stream-processing engines, web/nginx logs, access logs, messaging services, and so on. It is written in Scala; LinkedIn open-sourced it and donated it to the Apache Software Foundation in 2011, where it became a top-level project in 2012.

Today, applications of every kind (commerce, social, search, browsing) churn out information like factories. In the big-data era we face several challenges:

(1) How to collect this enormous volume of information

(2) How to analyze it

(3) How to do both of the above in time

These challenges define a business model: producers produce information, consumers consume (process and analyze) it, and between the two sits a bridge, the message system. At a micro level, the same need can be phrased as the question of how different systems pass messages to each other.

Message queue communication models

Point-to-point model

The point-to-point model is usually a pull- or polling-based delivery model whose defining property is that a message sent to the queue is processed by one and only one consumer. After the producer puts a message on the queue, the consumer actively pulls it for consumption. The advantage is that the consumer controls its own pull rate; the drawback is that the consumer cannot tell whether the queue actually has messages waiting, so it needs an extra thread to monitor the queue.

Publish-subscribe model

The publish-subscribe model is a push-based delivery model that can serve many different subscribers. After the producer puts a message on the queue, the queue pushes it to every consumer subscribed to that type of message (much like a WeChat official account). Since consumers receive pushes passively, they never need to poll for pending messages. However, consumer1, consumer2 and consumer3 run on machines of different capability and so process messages at different speeds, while the queue has no way to sense how fast each consumer consumes. The push rate therefore becomes the weak point of this model: suppose the three consumers process at 8 MB/s, 5 MB/s and 2 MB/s. If the queue pushes at 5 MB/s, consumer3 cannot keep up; if it pushes at 2 MB/s, consumer1 and consumer2 waste most of their capacity.

Kafka architecture

Kafka is a high-throughput distributed publish-subscribe message system that can handle all the activity-stream data of a consumer-scale website, offering high performance, persistence, multi-replica backup and horizontal scalability.

  1. Producer: the producer, i.e. the creator of messages; the entry point for a message.

  2. Broker: a Kafka instance. Each server runs one or more Kafka instances; for simplicity, assume one broker per server. Every broker in a cluster has a unique id, e.g. broker-0, broker-1, and so on.

  3. Topic: the subject of a message; think of it as a message category. Kafka data is stored in topics, and multiple topics can be created on every broker.

  4. Partition: a slice of a topic. A topic can have several partitions, whose job is to spread the load and raise Kafka's throughput. The same topic's data never overlaps across partitions, and on disk each partition is simply a directory.

  5. Replication: every partition has multiple replicas, which act as standbys. When the leader partition fails, a follower is elected and takes over as leader. In Kafka the default maximum number of replicas is 10, the replica count can never exceed the number of brokers, and a leader and its followers always sit on different machines: a machine holds at most one replica of any given partition.

  6. Message: the body of each message sent.

  7. Consumer: the consumer, i.e. the reader of messages; the exit point for a message.

  8. Consumer Group: several consumers can be combined into a consumer group. By design, a given partition's data can be consumed by only one consumer within the group, while consumers in the same group can consume different partitions of the same topic in parallel, which again raises Kafka's throughput.

  9. Zookeeper: the Kafka cluster relies on ZooKeeper to store its cluster metadata and keep the system available.

After a message is written to the leader, the followers actively pull from the leader to synchronize. The producer pushes data to the broker; each message is appended to a partition and written to disk sequentially, which is why data within a single partition is guaranteed to be ordered.
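As a concrete illustration of these rules, the console tools can be used once a broker is running (a hedged sketch: it assumes a broker reachable at localhost:9092, such as one of the brokers deployed below, and demo is an illustrative topic name):

//create a topic with 3 partitions and 2 replicas
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --topic demo --partitions 3 --replication-factor 2
//messages that share a key hash to the same partition, so per-key order is preserved
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic demo --property parse.key=true --property key.separator=:
>user1:login
>user1:purchase

Ordering is guaranteed only within a single partition; consumers reading different partitions may see messages from different keys interleaved.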

Deploy the zookeeper cluster

Prepare three hosts

192.168.100.10       zookeeper1

192.168.100.20       zookeeper2

192.168.100.30       zookeeper3

Disable the firewall and SELinux, and set up time synchronization.

Install java

Run mkdir /opt/software on all three hosts.

Upload the downloaded Java package to the server:

[root@zookeeper1 software]# ls
jdk-8u181-linux-x64.tar.gz

Extract it:
[root@zookeeper1 software]# tar -zxvf jdk-8u181-linux-x64.tar.gz

Edit the configuration file:
[root@zookeeper1 software]# vim /etc/profile
export JAVA_HOME=/opt/software/jdk1.8.0_181
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

Send the Java directory and /etc/profile to zookeeper2 and zookeeper3:

[root@zookeeper1 software]# scp  -r  jdk1.8.0_181/ root@192.168.100.20:/opt/software/
[root@zookeeper1 software]# scp  -r  jdk1.8.0_181/ root@192.168.100.30:/opt/software/
[root@zookeeper1 software]# scp /etc/profile root@192.168.100.20:/etc/profile
[root@zookeeper1 software]# scp /etc/profile root@192.168.100.30:/etc/profile

//source /etc/profile on all three hosts
[root@zookeeper1 ~]# source /etc/profile
[root@zookeeper2 ~]# source /etc/profile
[root@zookeeper3 ~]# source /etc/profile

Install zookeeper

Upload the downloaded zookeeper package to the host:
[root@zookeeper1 software]# rz -E
rz waiting to receive.
[root@zookeeper1 software]# ls
jdk1.8.0_181  jdk-8u181-linux-x64.tar.gz  zookeeper-3.4.8.tar.gz


//extract the zookeeper package
[root@zookeeper1 software]# tar -zxvf zookeeper-3.4.8.tar.gz
[root@zookeeper1 software]# mv zookeeper-3.4.8 zookeeper
[root@zookeeper1 software]# cd zookeeper/
[root@zookeeper1 zookeeper]# mkdir data logs
[root@zookeeper1 zookeeper]# cd conf/
[root@zookeeper1 conf]# cp zoo_sample.cfg zoo.cfg
[root@zookeeper1 conf]# vim zoo.cfg


//set the dataDir parameter as follows:
dataDir=/opt/software/zookeeper/data


//append the following lines at the end of the file
server.1=192.168.100.10:2888:3888
server.2=192.168.100.20:2888:3888
server.3=192.168.100.30:2888:3888
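For reference, the meaning of these entries (the ports are the ZooKeeper defaults):

//server.<myid>=<host>:<quorum-port>:<election-port>
//2888: the port followers use to synchronize with the leader
//3888: the port used during leader election
//clients connect on clientPort (2181, already set in zoo_sample.cfg)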


//write each node's id on that node; every node gets a different number: zookeeper1 gets 1, zookeeper2 gets 2, zookeeper3 gets 3
[root@zookeeper1 conf]# echo 1 > /opt/software/zookeeper/data/myid

//send the zookeeper directory to the other two hosts
[root@zookeeper1 software]# scp -r zookeeper/ root@192.168.100.20:/opt/software/
[root@zookeeper1 software]# scp -r zookeeper/ root@192.168.100.30:/opt/software/

//set the id on zookeeper2
[root@zookeeper2 ~]# echo 2 > /opt/software/zookeeper/data/myid
//set the id on zookeeper3
[root@zookeeper3 ~]# echo 3 > /opt/software/zookeeper/data/myid


//add zookeeper to the environment variables
[root@zookeeper1 ~]# vim /etc/profile
export JAVA_HOME=/opt/software/jdk1.8.0_181
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export ZOOKEEPER_HOME=/opt/software/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin

//send /etc/profile to the other two hosts
[root@zookeeper1 software]# scp /etc/profile root@192.168.100.20:/etc/profile
[root@zookeeper1 software]# scp /etc/profile root@192.168.100.30:/etc/profile

//source /etc/profile on all three hosts
[root@zookeeper1 ~]# source /etc/profile
[root@zookeeper2 ~]# source /etc/profile
[root@zookeeper3 ~]# source /etc/profile

//start zookeeper on all three hosts
[root@zookeeper1 ~]# zkServer.sh start
[root@zookeeper2 ~]# zkServer.sh start
[root@zookeeper3 ~]# zkServer.sh start

//check zookeeper's status on all three hosts
[root@zookeeper1 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Mode: follower

[root@zookeeper2 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Mode: follower

[root@zookeeper3 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Mode: leader
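As an optional sanity check that the ensemble really serves requests (the znode name /test is arbitrary and used only for illustration), create a znode through one node and read it back through another:

[root@zookeeper1 ~]# zkCli.sh -server 192.168.100.10:2181 create /test "hello"
[root@zookeeper2 ~]# zkCli.sh -server 192.168.100.20:2181 get /test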


Install kafka

Upload the downloaded kafka package:
[root@zookeeper1 ~]# rz -E
rz waiting to receive.
[root@zookeeper1 ~]# ls
anaconda-ks.cfg  kafka_2.11-2.4.0.tgz


//extract kafka
[root@zookeeper1 ~]# tar -zxvf kafka_2.11-2.4.0.tgz

//edit the kafka configuration file
[root@zookeeper1 ~]# vim kafka_2.11-2.4.0/config/server.properties
Find the following two lines in the file and comment them out (prefix them with #):
#broker.id=0
#zookeeper.connect=localhost:2181
Then add:
broker.id=1
zookeeper.connect=192.168.100.10:2181,192.168.100.20:2181,192.168.100.30:2181
listeners = PLAINTEXT://192.168.100.10:9092
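What the three settings do (descriptive notes, not extra configuration):

//broker.id: a unique integer identity for this broker within the cluster
//zookeeper.connect: the ZooKeeper ensemble the brokers register with (identical on all nodes)
//listeners: the address and port this broker accepts client connections on; each node must advertise its own reachable IP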

//send kafka to the other two hosts
[root@zookeeper1 ~]# scp -r kafka_2.11-2.4.0/ root@192.168.100.20:/root/
[root@zookeeper1 ~]# scp -r kafka_2.11-2.4.0/ root@192.168.100.30:/root/

//edit the kafka configuration on the zookeeper2 host
[root@zookeeper2 ~]# vim kafka_2.11-2.4.0/config/server.properties
broker.id=2
zookeeper.connect=192.168.100.10:2181,192.168.100.20:2181,192.168.100.30:2181
listeners = PLAINTEXT://192.168.100.20:9092

//edit the kafka configuration on the zookeeper3 host
[root@zookeeper3 ~]# vim kafka_2.11-2.4.0/config/server.properties
broker.id=3
zookeeper.connect=192.168.100.10:2181,192.168.100.20:2181,192.168.100.30:2181
listeners = PLAINTEXT://192.168.100.30:9092

//start kafka on all three hosts
[root@zookeeper1 ~]# ./kafka_2.11-2.4.0/bin/kafka-server-start.sh -daemon ./kafka_2.11-2.4.0/config/server.properties

[root@zookeeper2 ~]# ./kafka_2.11-2.4.0/bin/kafka-server-start.sh -daemon ./kafka_2.11-2.4.0/config/server.properties

[root@zookeeper3 ~]# ./kafka_2.11-2.4.0/bin/kafka-server-start.sh -daemon ./kafka_2.11-2.4.0/config/server.properties

//check with jps
[root@zookeeper1 ~]# jps
2770 Jps
2024 Kafka
1565 QuorumPeerMain

[root@zookeeper2 ~]# jps
1905 Kafka
2339 Jps
1452 QuorumPeerMain

[root@zookeeper3 ~]# jps
1155 QuorumPeerMain
1704 Kafka
2172 Jps

Test kafka:
[root@zookeeper1 ~]# ./kafka_2.11-2.4.0/bin/kafka-topics.sh --create --zookeeper 192.168.100.10:2181 --replication-factor 1 --partitions 1 --topic test

[root@zookeeper2 ~]# ./kafka_2.11-2.4.0/bin/kafka-topics.sh  --list --zookeeper 192.168.100.20:2181
test

[root@zookeeper3 ~]# ./kafka_2.11-2.4.0/bin/kafka-topics.sh  --list --zookeeper 192.168.100.30:2181
test
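With the topic in place, an optional end-to-end check is possible (the addresses below assume the listeners configured above; run the consumer in a second terminal):

//confirm all three brokers registered in ZooKeeper
[root@zookeeper1 ~]# zkCli.sh -server 192.168.100.10:2181 ls /brokers/ids
[1, 2, 3]
//produce a message, then read it back
[root@zookeeper1 ~]# ./kafka_2.11-2.4.0/bin/kafka-console-producer.sh --broker-list 192.168.100.10:9092 --topic test
>hello kafka
[root@zookeeper2 ~]# ./kafka_2.11-2.4.0/bin/kafka-console-consumer.sh --bootstrap-server 192.168.100.20:9092 --topic test --from-beginning
hello kafka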

The gpmall shopping-mall application

Pre-deployment configuration

Four hosts running CentOS 7:

192.168.100.10

192.168.100.20

192.168.100.30

192.168.100.40

Clocks must be synchronized, and firewalld and SELinux must both be disabled.

Change the hostnames

1. Set the hostname on mycat:
hostnamectl set-hostname mycat

2. Set the hostname on db1:
hostnamectl set-hostname db1

3. Set the hostname on db2:
hostnamectl set-hostname db2

Change the yum repositories

All four hosts need this change; the steps are identical.

Fetch the Aliyun yum repository:
[root@mycat ~]# rm -rf /etc/yum.repos.d/*
[root@mycat ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
--2024-08-21 18:15:36--  https://mirrors.aliyun.com/repo/Centos-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 116.207.154.24, 116.207.154.27, 116.207.154.28
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|116.207.154.24|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: '/etc/yum.repos.d/CentOS-Base.repo'

100%[=============================================================>] 2,523       --.-K/s   in 0s

2024-08-21 18:15:37 (41.3 MB/s) - '/etc/yum.repos.d/CentOS-Base.repo' saved [2523/2523]

[root@mycat ~]# ls /etc/yum.repos.d/
CentOS-Base.repo


Install the EPEL repository; the same operation on all four hosts:

[root@mycat ~]# yum -y install epel-release

[root@mycat ~]# ls /etc/yum.repos.d/
CentOS-Base.repo  epel.repo  epel-testing.repo

Bind IPs to hostnames

[root@mycat ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10	mycat
192.168.100.20	db1
192.168.100.30	db2


Set up passwordless SSH from mycat:
[root@mycat ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:vsSL/DohEGi94ucycnYxP7E/UUiHPlvyyrnUttu+gJI root@mycat
The key's randomart image is:
+---[RSA 2048]----+
| ..      .       |
|....    o .      |
|.  ..  o o       |
| ...    = o      |
|. ..    SB       |
| . .+ o+oo.      |
|  o  =E=*++      |
|.oo...===+ +     |
|.oo.  +**ooo+.   |
+----[SHA256]-----+
Copy the key to db1:
[root@mycat ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.100.20
And to db2:
[root@mycat ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.100.30

Set up passwordless SSH from db1:

[root@db1 ~]# ssh-keygen
[root@db1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.100.10
[root@db1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.100.30

Set up passwordless SSH from db2:

[root@db2 ~]# ssh-keygen
[root@db2 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.100.10
[root@db2 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.100.20

Send mycat's /etc/hosts to the other two hosts:
[root@mycat ~]# scp /etc/hosts root@192.168.100.20:/etc/hosts
hosts                                                                100%  217    27.4KB/s   00:00    
[root@mycat ~]# scp /etc/hosts root@192.168.100.30:/etc/hosts
hosts                                                                100%  217   229.5KB/s   00:00    

Turn node4 into a web server

[root@node4 ~]# yum -y install httpd

Upload gpmall-repo with FileZilla:

[root@node4 project4]# pwd
/var/www/html/project4
[root@node4 project4]# ls
gpmall-repo

Back on mycat, write a yum repo entry by hand:

[root@mycat ~]# cd /etc/yum.repos.d/
[root@mycat yum.repos.d]# vim CentOS-Base.repo 
Append at the very end:
[mariadb]
name=mariadb
baseurl=http://192.168.100.40/project4/gpmall-repo
enabled=1
gpgcheck=0


Send it to db1 and db2:
[root@mycat yum.repos.d]# scp CentOS-Base.repo root@192.168.100.20:/etc/yum.repos.d/
CentOS-Base.repo                                                     100% 2619     2.6MB/s   00:00    

[root@mycat yum.repos.d]# scp CentOS-Base.repo root@192.168.100.30:/etc/yum.repos.d/
CentOS-Base.repo                                                     100% 2619     3.3MB/s   00:00 
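Optionally verify on each host that the new repo resolves (mariadb is the repo id defined above):

[root@db1 ~]# yum clean all && yum repolist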

Back on node4, start the httpd service:

[root@node4 ~]# systemctl restart httpd
[root@node4 ~]# systemctl enable httpd

Install the Java environment on all four hosts; it will be needed later:

[root@mycat ~]# yum -y install java java-devel
[root@db1 ~]# yum -y install java java-devel
[root@db2 ~]# yum -y install java java-devel
[root@node4 ~]#  yum -y install java java-devel


[root@mycat ~]# java -version
openjdk version "1.8.0_412"
OpenJDK Runtime Environment (build 1.8.0_412-b08)
OpenJDK 64-Bit Server VM (build 25.412-b08, mixed mode)



[root@db1 ~]# java -version
openjdk version "1.8.0_412"
OpenJDK Runtime Environment (build 1.8.0_412-b08)
OpenJDK 64-Bit Server VM (build 25.412-b08, mixed mode)


[root@db2 ~]# java -version
openjdk version "1.8.0_412"
OpenJDK Runtime Environment (build 1.8.0_412-b08)
OpenJDK 64-Bit Server VM (build 25.412-b08, mixed mode)


[root@node4 ~]# java -version
openjdk version "1.8.0_412"
OpenJDK Runtime Environment (build 1.8.0_412-b08)
OpenJDK 64-Bit Server VM (build 25.412-b08, mixed mode)

Master-slave replication

MariaDB must be installed on both db1 and db2:
 

yum install -y mariadb mariadb-server
systemctl start mariadb;systemctl enable mariadb

Initialize the database

On both db1 and db2:
mysql_secure_installation

Press Enter        (current root password: none)
y                  (set a root password)
enter a password   (your own choice)
enter it again
y                  (remove anonymous users)
n                  (keep remote root login enabled; mycat connects as root later)
y                  (remove the test database)
y                  (reload privilege tables)

Configure db1's database configuration file

db1

vim /etc/my.cnf
Add the following:
[mysqld]
log_bin=mysql-bin
binlog_ignore_db=mysql
server_id=20                        // must be unique on every node; here the master uses the smaller value
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0

[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid


db2

vim /etc/my.cnf
Add the following:
[mysqld]
log_bin=mysql-bin
binlog_ignore_db=mysql
server_id=30                                // a unique id, different from the master's
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0

[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid

Restart the service after making the changes:
systemctl restart mariadb


Log in to the database on db1 and set up the privileges:
mysql -uroot -plinux
grant all privileges on *.* to root@'%' identified by "linux";
grant replication slave on *.* to 'user'@'db2' identified by 'linux';
flush privileges;                    // reload the grant tables
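The change master command used in the next step starts replication from the beginning of db1's binlog, which is fine on a fresh install. If db1 already holds data, record the master's coordinates first and pass them explicitly (a hedged variant: the file name and position shown are placeholders that will differ on your system):

MariaDB [(none)]> show master status;
//then on db2, pin the coordinates:
change master to master_host='db1',master_user='user',master_password='linux',master_log_file='mysql-bin.000001',master_log_pos=245;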


Configure db2 to replicate from db1:
mysql -uroot -p123456
grant all privileges on *.* to root@'%' identified by "linux";
change master to master_host='db1',master_user='user',master_password='linux';
start slave;
show slave status\G
You should now see:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

The master-slave setup is now complete.

Test

Verify master-slave synchronization: on db1, create the database test, create the table company, insert a row, then query it.

On the master:
MariaDB [(none)]> create database test;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> use test;
Database changed
MariaDB [test]> create table company(id int not null primary key,name varchar(50),addr varchar(255));
Query OK, 0 rows affected (0.005 sec)

MariaDB [test]> insert into company values(1,"facebook","usa");
Query OK, 1 row affected (0.001 sec)

MariaDB [test]> select * from company;
+----+----------+------+
| id | name     | addr |
+----+----------+------+
|  1 | facebook | usa  |
+----+----------+------+
1 row in set (0.000 sec)

Now check the slave:

MariaDB [test]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.000 sec)

MariaDB [test]> select * from test.company;
+----+----------+------+
| id | name     | addr |
+----+----------+------+
|  1 | facebook | usa  |
+----+----------+------+
1 row in set (0.000 sec)

Replication works.


Read-write splitting

Deploy the Mycat service: upload Mycat to the mycat host and extract it to /usr/local:

tar -zxvf Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz -C /usr/local/

Set the environment variable:
echo 'export MYCAT_HOME=/usr/local/mycat/' >> /etc/profile

Source it:

source /etc/profile
Check:
[root@mycat ~]# echo $MYCAT_HOME
/usr/local/mycat/

Edit Mycat's read-write-splitting schema.xml, setting db1 as the write node and db2 as the read node (balance="3" routes all read requests to the readHost, leaving the writeHost to handle only writes); remember to adjust the IPs for your environment.

vi /usr/local/mycat/conf/schema.xml        // delete all existing content in the file and insert the content below

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="USERDB" checkSQLschema="true" sqlMaxLimit="100" dataNode="dn1"></schema>
<dataNode name="dn1" dataHost="localhost1" database="test" />
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="3" dbType="mysql" dbDriver="native" writeType="0" switchType="1" slaveThreshold="100">
    <heartbeat>select user()</heartbeat>
    <writeHost host="hostM1" url="192.168.100.20:3306" user="root" password="linux">
        <readHost host="hostS1" url="192.168.100.30:3306" user="root" password="linux" />
    </writeHost>
</dataHost>
</mycat:schema>

vim /usr/local/mycat/conf/server.xml

Delete the following 5 lines, found at the very end of the file:
<user name="root">
		<property name="password">user</property>
		<property name="schemas">TESTDB</property>
		<property name="readOnly">true</property>
 
</user>
 
 
Then one more change is needed: set the schema to USERDB (and the password to linux):
<user name="root">
		<property name="password">linux</property>
		<property name="schemas">USERDB</property>

Fix the ownership of schema.xml on mycat:

chown root:root /usr/local/mycat/conf/schema.xml

Start the mycat service:

/bin/bash /usr/local/mycat/bin/mycat start

Check the ports after startup:
Check whether ports 8066 and 9066 are open; if both are listening, the mycat service started successfully. 8066 is mycat's default data port, while 9066 is the management port, used to administer the state of the whole mycat cluster.
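A quick way to confirm both ports are listening:

[root@mycat ~]# ss -anlt | grep -E '8066|9066'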

Test

[root@mycat ~]# yum install -y MariaDB-client

Log in through mycat's data port with the user defined in server.xml:
[root@mycat ~]# mysql -h127.0.0.1 -P8066 -uroot -plinux

MySQL [(none)]> show databases;
+----------+
| DATABASE |
+----------+
| USERDB   |
+----------+
1 row in set (0.002 sec)

MySQL [(none)]> use USERDB
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MySQL [USERDB]> show tables;
+----------------+
| Tables_in_test |
+----------------+
| company        |
+----------------+
1 row in set (0.001 sec)

MySQL [USERDB]> select * from company;
+----+----------+------+
| id | name     | addr |
+----+----------+------+
|  1 | facebook | usa  |
+----+----------+------+
1 row in set (0.030 sec)

Verify that mycat separates database reads and writes

Check the read-write split:

[root@mycat ~]# mysql -h127.0.0.1 -P9066 -uroot -plinux -e 'show @@datasource;'
+----------+--------+-------+----------------+------+------+--------+------+------+---------+-----------+------------+
| DATANODE | NAME   | TYPE  | HOST           | PORT | W/R  | ACTIVE | IDLE | SIZE | EXECUTE | READ_LOAD | WRITE_LOAD |
+----------+--------+-------+----------------+------+------+--------+------+------+---------+-----------+------------+
| dn1      | hostM1 | mysql | 192.168.100.20 | 3306 | W    |      0 |   10 | 1000 |      35 |         0 |          0 |
| dn1      | hostS1 | mysql | 192.168.100.30 | 3306 | R    |      0 |    4 | 1000 |      31 |         3 |          0 |
+----------+--------+-------+----------------+------+------+--------+------+------+---------+-----------+------------+

The gpmall cluster

Message middleware cluster: zookeeper and kafka

[root@mycat ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10	mycat
192.168.100.20	db1
192.168.100.30	db2
192.168.100.10    zookeeper1
192.168.100.20    zookeeper2
192.168.100.30    zookeeper3


Copy it to the other two machines with scp:
scp /etc/hosts root@192.168.100.20:/etc/hosts
scp /etc/hosts root@192.168.100.30:/etc/hosts

Install the Java JDK environment on the 3 nodes

This was already done earlier.

Continue configuring on the mycat host

Upload the pre-downloaded zookeeper package to mycat:
[root@mycat ~]# rz -E
rz waiting to receive.
[root@mycat ~]# tar -zxvf zookeeper-3.4.8.tar.gz
Edit the configuration file:
[root@mycat ~]# cd zookeeper-3.4.8/conf/
[root@mycat conf]# mv zoo_sample.cfg zoo.cfg
[root@mycat conf]# vim zoo.cfg
//append the following at the end (dataDir keeps zoo_sample.cfg's default /tmp/zookeeper):
server.1=192.168.100.10:2888:3888
server.2=192.168.100.20:2888:3888
server.3=192.168.100.30:2888:3888

Sync it to all three machines.

Create the data directory and myid on zookeeper1:
mkdir /tmp/zookeeper; touch /tmp/zookeeper/myid
echo 1 > /tmp/zookeeper/myid

[root@mycat ~]# scp -r zookeeper-3.4.8 root@192.168.100.20:/root
[root@mycat ~]# scp -r zookeeper-3.4.8 root@192.168.100.30:/root

Modify the myid on the other two nodes:
zookeeper2

[root@db1 conf]# echo 2 > /tmp/zookeeper/myid
[root@db1 conf]# cat /tmp/zookeeper/myid
2

zookeeper3

[root@db2 conf]# echo 3 > /tmp/zookeeper/myid
[root@db2 conf]# cat /tmp/zookeeper/myid
3

Start zookeeper on all three hosts:

./zookeeper-3.4.8/bin/zkServer.sh start


Check the status of all three hosts:
1
[root@mycat ~]# ./zookeeper-3.4.8/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: follower

2
[root@db1 ~]# ./zookeeper-3.4.8/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: leader

3
[root@db2 ~]# ./zookeeper-3.4.8/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: follower

Kafka cluster deployment

The Kafka deployment is identical to the "Install kafka" section earlier, repeated across these three nodes: upload and extract kafka_2.11-2.4.0.tgz, set broker.id to 1/2/3, point zookeeper.connect at 192.168.100.10:2181,192.168.100.20:2181,192.168.100.30:2181, set each node's listeners to its own 192.168.100.x:9092 address, copy the directory to the other two hosts with scp, start each broker with kafka-server-start.sh -daemon, confirm the Kafka and QuorumPeerMain processes with jps, then create and list the test topic with kafka-topics.sh.

Application cluster with nginx

Import the database on db1

Upload gpmall.sql to db1 and import it into MySQL:

mysql -uroot -p123456
create database gpmall;
use gpmall;
source /root/gpmall.sql;

MariaDB [gpmall]> show tables;
+--------------------+
| Tables_in_gpmall   |
+--------------------+
| tb_address         |
| tb_base            |
| tb_comment         |
| tb_comment_picture |
| tb_comment_reply   |
| tb_dict            |
| tb_express         |
| tb_item            |
| tb_item_cat        |
| tb_item_desc       |
| tb_log             |
| tb_member          |
| tb_order           |
| tb_order_item      |
| tb_order_shipping  |
| tb_panel           |
| tb_panel_content   |
| tb_payment         |
| tb_refund          |
| tb_stock           |
| tb_user_verify     |
+--------------------+
21 rows in set (0.000 sec)

Modify the mycat configuration:

vi /usr/local/mycat/conf/schema.xml
//change the schema name and the backing database to gpmall (both places)
vi /usr/local/mycat/conf/server.xml
//change the schema name to gpmall

Restart:
/usr/local/mycat/bin/mycat restart

Redis

Cache server: redis
vi /etc/hosts
192.168.100.10    mycat
192.168.100.20    db1
192.168.100.30    db2
192.168.100.10    zookeeper1
192.168.100.20    zookeeper2
192.168.100.30    zookeeper3
192.168.100.10    redis

[root@mycat ~]# scp /etc/hosts root@192.168.100.20:/etc/hosts
hosts                                                                100%  325   444.2KB/s   00:00    
[root@mycat ~]# scp /etc/hosts root@192.168.100.30:/etc/hosts
hosts                                                                100%  325   406.1KB/s   00:00    

Install redis and edit its configuration file:
[root@mycat ~]# yum -y install redis

vi /etc/redis.conf

#bind 127.0.0.1                     // comment out the bind line
protected-mode no                   // change protected-mode from yes to no


Start redis:
[root@mycat ~]# systemctl restart redis
[root@mycat ~]# systemctl enable redis
Created symlink from /etc/systemd/system/multi-user.target.wants/redis.service to /usr/lib/systemd/system/redis.service.

Check the port:
[root@mycat ~]# ss -anlt | grep 6379
LISTEN     0      128          *:6379                     *:*                  
LISTEN     0      128         :::6379                    :::*                  
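As a last check, confirm that redis also answers on the non-loopback address (PONG is the expected reply):

[root@mycat ~]# redis-cli -h 192.168.100.10 ping
PONG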

The clustered application system

1. vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10  mycat
192.168.100.20  db1
192.168.100.30  db2
192.168.100.10    zookeeper1
192.168.100.20    zookeeper2
192.168.100.30    zookeeper3
192.168.100.10  redis
192.168.100.10    mysql.mall
192.168.100.10    zk1.mall
192.168.100.20    zk2.mall
192.168.100.30    zk3.mall
192.168.100.10    kafka1.mall
192.168.100.20    kafka2.mall
192.168.100.30    kafka3.mall
192.168.100.10    redis.mall

Send it to db1 and db2:
[root@mycat ~]# scp /etc/hosts root@192.168.100.20:/etc/hosts
hosts                                                                100%  554   880.2KB/s   00:00    
[root@mycat ~]# scp /etc/hosts root@192.168.100.30:/etc/hosts
hosts                                                                100%  554   632.8KB/s   00:00

Start the Java packages

The four jar packages are uploaded the same way; both db1 and db2 need all four.

Start them in order:
nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &

Check with jobs -l:
[root@db1 ~]# jobs -l
[1]  22226 Running               nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
[2]  22271 Running               nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
[3]- 22309 Running               nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
[4]+ 22347 Running               nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &


[root@db2 ~]# jobs -l
[1]  22123 Running               nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
[2]  22174 Running               nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
[3]- 22208 Running               nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
[4]+ 22283 Running               nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
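The web-facing services listen on ports 8081-8083, the same ports the nginx upstreams below point at; optionally confirm they are up on db1 and db2 before configuring the front end:

[root@db1 ~]# ss -anlt | grep -E ':808[123]'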

Deploy the web front end on mycat

[root@mycat ~]# yum -y install nginx

Upload the built web files (the dist directory) to mycat with FileZilla.

Delete the default web content:
rm -rf /usr/share/nginx/html/*

Copy the site files in:
cp -rvf dist/* /usr/share/nginx/html/

Edit the configuration file:
vi /etc/nginx/conf.d/default.conf
        upstream myuser {
            ip_hash;
            server 192.168.100.20:8082;
            server 192.168.100.30:8082;
        }

        upstream myshopping {
            ip_hash;
            server 192.168.100.20:8081;
            server 192.168.100.30:8081;
        }

        upstream mycashier {
            ip_hash;
            server 192.168.100.20:8083;
            server 192.168.100.30:8083;
        }

        server {
            location /user {
                proxy_pass http://myuser;
            }

            location /shopping {
                proxy_pass http://myshopping;
            }

            location /cashier {
                proxy_pass http://mycashier;
            }
        }
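Before starting nginx, validate the configuration with nginx's built-in self-check:

[root@mycat ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful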

Start nginx and check that port 80 is listening:

[root@mycat ~]# systemctl start nginx
[root@mycat ~]# ss -anlt | grep 80
LISTEN     0      128          *:80                       *:*                  
                

Open the site in a browser: http://192.168.100.10
