Java common tech stack and clustering: notes and examples

Clusters

Goals:

I. MySQL cluster:

   

Common enterprise patterns:

MMM: simply put, a manager for MySQL master-master replication. When one server goes down it automatically fails over to the other, and it also supports master-slave replication.

Reference: "详解 MySQL 高可用群集,MMM搭建高可用" (CSDN blog)

MySQL master-slave, master-master, and dual-master multi-slave configuration: https://www.cnblogs.com/gspsuccess/p/9182545.html?utm_source=debugrun&utm_medium=referral

Setting up basic master-slave replication

master1  192.168.11.109

master2  192.168.11.110

slave1        192.168.11.111

slave2        192.168.11.112

1. Set up MySQL master-slave replication with 3 servers.

Directories: /root/dockerfile/mysql/config, /root/dockerfile/mysql/data, /root/dockerfile/mysql/log

docker pull mysql:5.7

Create the MySQL services:

1.1 Create the master MySQL instances (one container per master host listed above):

docker run -p 3301:3306 --name mysql-master01 \
-v /root/dockerfile/mysql/log:/var/log/mysql \
-v /root/dockerfile/mysql/data:/var/lib/mysql \
-v /root/dockerfile/mysql/config:/etc/mysql \
-e MYSQL_ROOT_PASSWORD=root \
-d mysql:5.7

docker run -p 3301:3306 --name mysql-master02 \
-v /root/dockerfile/mysql/log:/var/log/mysql \
-v /root/dockerfile/mysql/data:/var/lib/mysql \
-v /root/dockerfile/mysql/config:/etc/mysql \
-e MYSQL_ROOT_PASSWORD=root \
-d mysql:5.7

1.2 Configure the master's my.cnf (create it under /root/dockerfile/mysql/config):

# Basic settings
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
[mysqld]
# merged into one line: if init_connect appeared twice, the later entry would override the earlier one
init_connect='SET collation_connection = utf8_unicode_ci;SET NAMES utf8'
#init_connect='SET collation_connection = utf8_general_ci'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
# Replication settings (master); server_id must be unique across master and slaves
server_id=1
log-bin=mysql-bin
read-only=0
# Databases to replicate
binlog-do-db=gulimall_ums
binlog-do-db=gulimall_pms
binlog-do-db=gulimall_oms
binlog-do-db=gulimall_sms
binlog-do-db=gulimall_wms
binlog-do-db=gulimall_admin
# Databases not to replicate
replicate-ignore-db=mysql
replicate-ignore-db=sys
replicate-ignore-db=information_schema
replicate-ignore-db=performance_schema

1.3 Create the slave MySQL instances:

docker run -p 3303:3306 --name mysql-slave01 \
-v /root/dockerfile/mysql/log:/var/log/mysql \
-v /root/dockerfile/mysql/data:/var/lib/mysql \
-v /root/dockerfile/mysql/config:/etc/mysql \
-e MYSQL_ROOT_PASSWORD=root \
-d mysql:5.7

docker run -p 3304:3306 --name mysql-slave02 \
-v /root/dockerfile/mysql/log:/var/log/mysql \
-v /root/dockerfile/mysql/data:/var/lib/mysql \
-v /root/dockerfile/mysql/config:/etc/mysql \
-e MYSQL_ROOT_PASSWORD=root \
-d mysql:5.7

1.4 Configure the slaves' my.cnf:

····································································································

[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
[mysqld]
init_connect='SET collation_connection = utf8_unicode_ci;SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
# Replication settings (slave 1)
server_id=3
log-bin=mysql-bin
read-only=1
# Databases to replicate
binlog-do-db=gulimall_ums
binlog-do-db=gulimall_pms
binlog-do-db=gulimall_oms
binlog-do-db=gulimall_sms
binlog-do-db=gulimall_wms
binlog-do-db=gulimall_admin
# Databases not to replicate
replicate-ignore-db=mysql
replicate-ignore-db=sys
replicate-ignore-db=information_schema
replicate-ignore-db=performance_schema

·········································································································

[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
[mysqld]
init_connect='SET collation_connection = utf8_unicode_ci;SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
# Replication settings (slave 2)
server_id=4
log-bin=mysql-bin
read-only=1
# Databases to replicate
binlog-do-db=gulimall_ums
binlog-do-db=gulimall_pms
binlog-do-db=gulimall_oms
binlog-do-db=gulimall_sms
binlog-do-db=gulimall_wms
binlog-do-db=gulimall_admin
# Databases not to replicate
replicate-ignore-db=mysql
replicate-ignore-db=sys
replicate-ignore-db=information_schema
replicate-ignore-db=performance_schema

··························································································································

Restart every MySQL service.

1.5 Grant a user on the master for replication.

Add the replication user; run this on the master:

GRANT REPLICATION SLAVE ON *.* to 'backup'@'%' identified by '123456';

Check the master status:

show master status;

mysql> show master status;
+------------------+----------+---------------------------------------------------------------------------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB                                                                    | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+---------------------------------------------------------------------------------+------------------+-------------------+
| mysql-bin.000001 |      431 | gulimall_ums,gulimall_pms,gulimall_oms,gulimall_sms,gulimall_wms,gulimall_admin |                  |                   |
+------------------+----------+---------------------------------------------------------------------------------+------------------+-------------------+

1.6 Execute on both slave MySQL instances:

change master to master_host='192.168.11.109',master_user='backup',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=0,master_port=3301;

Start synchronization (note: master_log_file and master_log_pos in the command above are normally taken from the File and Position columns of SHOW MASTER STATUS):

start slave;

Check the status (Slave_IO_Running and Slave_SQL_Running should both show Yes):

show slave status;

Create a database, tables, and rows on the master; the changes are replicated to the slaves.
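
To double-check replication from Java, here is a minimal JDBC sketch. Assumptions: MySQL Connector/J is on the classpath, mysql-master01 is reachable at 192.168.11.109:3301 and mysql-slave01 at 192.168.11.111:3303 with the root/root credentials used above; the demo table is made up for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        // Write to the master (container mysql-master01, host port 3301)
        try (Connection master = DriverManager.getConnection(
                "jdbc:mysql://192.168.11.109:3301/?useSSL=false", "root", "root");
             Statement st = master.createStatement()) {
            // gulimall_admin is listed in binlog-do-db, so it is replicated
            st.execute("CREATE DATABASE IF NOT EXISTS gulimall_admin");
            st.execute("CREATE TABLE IF NOT EXISTS gulimall_admin.t_demo (id INT PRIMARY KEY, name VARCHAR(32))");
            st.execute("REPLACE INTO gulimall_admin.t_demo VALUES (1, 'hello')");
        }

        Thread.sleep(1000); // give the slave a moment to apply the binlog

        // Read the same row back from a slave (container mysql-slave01, host port 3303)
        try (Connection slave = DriverManager.getConnection(
                "jdbc:mysql://192.168.11.111:3303/gulimall_admin?useSSL=false", "root", "root");
             Statement st = slave.createStatement();
             ResultSet rs = st.executeQuery("SELECT name FROM t_demo WHERE id = 1")) {
            System.out.println(rs.next() ? "replicated: " + rs.getString(1) : "not replicated yet");
        }
    }
}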

2. Sharding (splitting databases and tables), which can also provide read/write splitting, works much like MyCat. Here we use ShardingSphere.

Sharding-Proxy :: ShardingSphere

docker pull apache/sharding-proxy

1. Add the driver: put the JDBC driver into the lib directory above, and rename any files in lib whose extension is not .jar to .jar.

2. Edit the configuration files:

server.yaml: authentication and server settings:

config-sharding.yaml: sharding (database/table split) settings:

Sharding strategy:

For the t_order table, actualDataNodes describes where the data physically lives: the databases ds_0/ds_1 (see the dataSources section above) and, per database, the tables t_order_0/t_order_1.
shardingColumn: order_id — the column the data is split by; here rows are routed by order number.
algorithmExpression: t_order_${order_id % 2} — which physical table a row lands in, matching the t_order_x tables above.
type: SNOWFLAKE — keys are generated automatically with the snowflake algorithm.
column: order_id — the column that receives the snowflake-generated key.
bindingTables: t_order, t_order_item — marks the two as bound (related) tables so joins between them do not fan out across every shard combination.
shardingColumn: user_id — rows go to different databases based on the user id, so that, say, users from region X end up in database X.
algorithmExpression: ds_${user_id % 2} — the database is likewise chosen by the remainder.
defaultTableStrategy:
none: the default table sharding strategy (no table sharding).

Read/write splitting configuration (multiple config files can be added; all of them are picked up automatically):

Start it: ./start.sh port

Connect to ip:port; the username and password are the ones configured in server.yaml above.
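
Applications talk to Sharding-Proxy over the ordinary MySQL protocol, so the regular JDBC driver works. A minimal sketch; the port 3307, the logical schema name sharding_db, the root/root account, and the status column are assumptions standing in for whatever server.yaml and config-sharding.yaml actually define:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ShardingProxyDemo {
    public static void main(String[] args) throws Exception {
        // The proxy speaks the MySQL protocol, so the normal MySQL driver is used.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://192.168.11.109:3307/sharding_db?useSSL=false", "root", "root");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO t_order (user_id, status) VALUES (?, ?)")) {
            // order_id is filled in by the SNOWFLAKE key generator configured above;
            // the row is routed to ds_${user_id % 2} and t_order_${order_id % 2}.
            ps.setLong(1, 1001L);
            ps.setString(2, "NEW");
            ps.executeUpdate();
        }
    }
}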

Master-slave replication needs to be configured first:

Databases as seen through the sharding connection:

In the original (physical) databases:

II. Redis cluster:

1. Client-side partitioning:

2. Proxy-based partitioning:

*3. Redis Cluster:

High availability:

2. Redis Cluster

https://redis.io/docs/management/scaling/

2.1 Hash slots

2.2 Consistent hashing

3. Deploy the cluster

3 masters and 3 replicas: each replica is a synchronized backup of its master, and the masters split the data across hash slots.
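
Each master owns a share of the 16384 hash slots, and a key belongs to slot CRC16(key) mod 16384 (hash tags aside). A minimal sketch of that slot computation, using the CRC16/XMODEM variant Redis Cluster uses (class and key names are illustrative):

public class RedisSlot {
    // CRC16/XMODEM (polynomial 0x1021, initial value 0x0000), the variant used by Redis Cluster
    static int crc16(byte[] data) {
        int crc = 0x0000;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = (crc & 0x8000) != 0 ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    public static void main(String[] args) {
        String key = "order:1001";
        int slot = crc16(key.getBytes()) % 16384;
        // the master that owns this slot is the node that stores the key
        System.out.println(key + " -> slot " + slot);
    }
}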

3.1 Script to create the 6 Redis nodes

for port in $(seq 7001 7006); \
do \
mkdir -p /root/dockerfile/redis/node-${port}/data
mkdir -p /root/dockerfile/redis/node-${port}/conf
touch /root/dockerfile/redis/node-${port}/conf/redis.conf
cat << EOF >> /root/dockerfile/redis/node-${port}/conf/redis.conf
port ${port}
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 192.168.11.109
cluster-announce-port ${port}
cluster-announce-bus-port 1${port}
appendonly yes
protected-mode no
EOF
docker run -p ${port}:${port} -p 1${port}:1${port} --name redis-${port} \
-v /root/dockerfile/redis/node-${port}/data:/data \
-v /root/dockerfile/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d redis:5.0.7 redis-server /etc/redis/redis.conf; \
done

/// Batch container management

Install awk first: yum install -y gawk

docker restart $(docker ps -a | grep redis-700 | awk '{print $1}')

docker stop $(docker ps -a | grep redis-700 | awk '{print $1}')

docker rm $(docker ps -a | grep redis-700 | awk '{print $1}')

If it errors out, delete the files under the data directory and retry.

3.2 Build the cluster with redis-cli (--cluster-replicas 1 gives every master one replica)

docker exec -ti redis-7001 bash

redis-cli --cluster create 192.168.11.109:7001 192.168.11.109:7002 192.168.11.109:7003 192.168.11.109:7004 192.168.11.109:7005 192.168.11.109:7006 --cluster-replicas 1

After entering the redis container, connect with the cluster client:

redis-cli -c -h 192.168.11.109 -p 7001

Check the cluster status:

cluster info

cluster nodes

Connecting to the cluster from application code is mainly a matter of configuration.

1. Add the dependencies

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-redis</artifactId>
</dependency>

————————————

2. Configuration

First, application.yml:

spring:
  redis:
    database: 0
    password: 123456
    jedis:
      pool:
        max-active: 8
        max-wait: -1
        max-idle: 8
        min-idle: 0
    timeout: 10000
    cluster:
      nodes:
        - 192.168.1.122:9001
        - 192.168.1.122:9002
        - 192.168.1.122:9003
        - 192.168.1.122:9004
        - 192.168.1.122:9005
        - 192.168.1.122:9006

3. Usage:

@Autowired
private StringRedisTemplate stringRedisTemplate;
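
Against a cluster, the template is used exactly as against a single node; the client computes the slot for each key and routes the command to the owning master. A minimal usage sketch (the service, key, and method names are made up for illustration):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

import java.util.concurrent.TimeUnit;

@Service
public class CacheService {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    public void cacheOrderStatus(long orderId, String status) {
        // The cluster client sends the command to the master that owns this key's slot.
        stringRedisTemplate.opsForValue().set("order:status:" + orderId, status, 30, TimeUnit.MINUTES);
    }

    public String getOrderStatus(long orderId) {
        return stringRedisTemplate.opsForValue().get("order:status:" + orderId);
    }
}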

III. Elasticsearch cluster:

1. Single node
2. Cluster health
3. Shards
4. Adding a node
5. Horizontal scaling - starting a third node
6. Handling failures
7. Problems and solutions
*8. Cluster layout

 

Two ES instances per server: one data node and one master node.

   

***************************** Cluster setup *************************************************

1. Prevent the JVM error:

Run: sysctl -w vm.max_map_count=262144

To make it permanent:

echo vm.max_map_count=262144 >> /etc/sysctl.conf

2. If you have 6 separate servers you can skip this step. Create a dedicated docker network:

List docker networks: docker network ls

docker network create --driver bridge --subnet=172.18.12.0/16 --gateway=172.18.1.1 mynet

Inspect the network: docker network inspect mynet

3. Create 3 master nodes:

docker pull elasticsearch:7.4.2   # storage and retrieval; index = database, type = table, document = row

docker pull kibana:7.4.2   # visualization tool

Create the directories (all of them need permissions: chmod 777 *):

/root/dockerfile/es          ES data root
/root/dockerfile/es/data     mounted data directory
/root/dockerfile/es/config   mounted config directory
/root/dockerfile/es/plugins  plugins directory

···········································master·····················································

for port in $(seq 1 3); \
do \
mkdir -p /root/dockerfile/es/master-${port}/data
mkdir -p /root/dockerfile/es/master-${port}/config
mkdir -p /root/dockerfile/es/master-${port}/plugins
chmod -R 777 /root/dockerfile/es/master-${port}
cat << EOF >> /root/dockerfile/es/master-${port}/config/elasticsearch.yml
cluster.name: my-es #cluster name; must be identical on every node of the cluster
node.name: es-master-${port} #this node's name
node.master: true #this node is eligible to be elected master
node.data: false #this node does not store data
network.host: 0.0.0.0
http.host: 0.0.0.0 #accept HTTP from any address
http.port: 920${port}
transport.tcp.port: 930${port}
#discovery.zen.minimum_master_nodes: 2 #how many master-eligible nodes a node must see; recommended (N/2)+1; no longer used in ES 7
discovery.zen.ping_timeout: 10s #timeout for discovery pings to other nodes
# initial list of master-eligible hosts used to discover the other nodes joining the cluster; new in ES 7
discovery.seed_hosts: ["172.18.12.21:9301","172.18.12.22:9302","172.18.12.23:9303"]
cluster.initial_master_nodes: ["172.18.12.21"] #initial master-eligible nodes when bootstrapping a brand-new cluster; new in ES 7
EOF
docker run --name es-node-${port} \
-p 920${port}:920${port} -p 930${port}:930${port} \
--network=mynet --ip 172.18.12.2${port} \
-e ES_JAVA_OPTS="-Xms300m -Xmx300m" \
-v /root/dockerfile/es/master-${port}/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /root/dockerfile/es/master-${port}/data:/usr/share/elasticsearch/data \
-v /root/dockerfile/es/master-${port}/plugins:/usr/share/elasticsearch/plugins \
-d elasticsearch:7.4.2
done

···········································································

···········································node·····················································

for port in $(seq 4 6); \
do \
mkdir -p /root/dockerfile/es/node-${port}/data
mkdir -p /root/dockerfile/es/node-${port}/config
mkdir -p /root/dockerfile/es/node-${port}/plugins
chmod -R 777 /root/dockerfile/es/node-${port}
cat << EOF >> /root/dockerfile/es/node-${port}/config/elasticsearch.yml
cluster.name: my-es #cluster name; must be identical on every node of the cluster
node.name: es-node-${port} #this node's name
node.master: false #this node is not eligible to be elected master
node.data: true #this node stores data
network.host: 0.0.0.0
#network.publish_host: 192.168.11.109 #IP used for inter-node communication; must be reachable by the other nodes, otherwise they cannot communicate
http.host: 0.0.0.0 #accept HTTP from any address
http.port: 920${port}
transport.tcp.port: 930${port}
#discovery.zen.minimum_master_nodes: 2 #how many master-eligible nodes a node must see; recommended (N/2)+1; no longer used in ES 7
discovery.zen.ping_timeout: 10s #timeout for discovery pings to other nodes
# initial list of master-eligible hosts used to discover the other nodes joining the cluster; new in ES 7
discovery.seed_hosts: ["172.18.12.21:9301","172.18.12.22:9302","172.18.12.23:9303"]
cluster.initial_master_nodes: ["172.18.12.21"] #initial master-eligible nodes when bootstrapping a brand-new cluster; new in ES 7
EOF
docker run --name es-node-${port} \
-p 920${port}:920${port} -p 930${port}:930${port} \
--network=mynet --ip 172.18.12.2${port} \
-e ES_JAVA_OPTS="-Xms300m -Xmx300m" \
-v /root/dockerfile/es/node-${port}/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /root/dockerfile/es/node-${port}/data:/usr/share/elasticsearch/data \
-v /root/dockerfile/es/node-${port}/plugins:/usr/share/elasticsearch/plugins \
-d elasticsearch:7.4.2
done

···········································································

Install awk first: yum install -y gawk

docker restart $(docker ps -a | grep es-node- | awk '{print $1}')

docker stop $(docker ps -a | grep es-node- | awk '{print $1}')

docker rm $(docker ps -a | grep es-node- | awk '{print $1}')

Check the nodes:

http://192.168.11.119:9206/_cat/nodes

http://192.168.11.109:9201/_cluster/stats?pretty

Using the cluster in a project:

Option 1:

elasticsearch:
  nodes: xxx.xxx.xxx.xxx:9200,xxx.xxx.xxx.xxx:9200,xxx.xxx.xxx.xxx:9200
  schema: http
  max-connect-total: 100
  max-connect-per-route: 50
  connection-request-timeout-millis: 3000
  socket-timeout-millis: 30000
  connect-timeout-millis: 3000

Option 2:

spring:
  data:
    elasticsearch:
      cluster-name: media
      cluster-nodes: xxx.xxx.xxx.xxx:9300,xxx.xxx.xxx.xxx:9300,xxx.xxx.xxx.xxx:9300
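
For option 1, the custom properties above are typically bound by hand and used to build a RestHighLevelClient. A minimal sketch pointing at the HTTP ports of the data nodes started earlier (the class name is illustrative, and in practice the hosts would come from the nodes property rather than being hard-coded):

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EsClusterConfig {

    @Bean
    public RestHighLevelClient restHighLevelClient() {
        // List two or more nodes so the client can fail over if one goes down.
        return new RestHighLevelClient(RestClient.builder(
                new HttpHost("192.168.11.109", 9204, "http"),
                new HttpHost("192.168.11.109", 9205, "http"),
                new HttpHost("192.168.11.109", 9206, "http")));
    }
}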

A bug encountered during installation:

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will l

First attempt: install JDK 9 and configure the environment.

wget https://repo.huaweicloud.com/java/jdk/9.0.1+11/jdk-9.0.1_linux-x64_bin.tar.gz

That did not solve it.

New solution:

Run the container as above first, then copy the jvm.options file out, edit it, and copy it back:

Change -XX:+UseConcMarkSweepGC to -XX:+UseG1GC

docker cp es:/usr/share/elasticsearch/config/jvm.options /root/data

docker cp /root/data/jvm.options es:/usr/share/elasticsearch/config/jvm.options

IV. RabbitMQ cluster:

1. Cluster modes

 

Pull the image:

docker pull rabbitmq:management   # image with the management console

Plain single-node install and run:

docker run -d --name rabbitmq -p 5671:5671 -p 5672:5672 -p 4369:4369 -p 25672:25672 -p 15671:15671 -p 15672:15672 rabbitmq:management

Open the management console:

http://192.168.11.119:15672/   guest  guest

2. Build the cluster:

2.1 Create three MQ nodes

`````````````````````````````````````````````````````````````````

mkdir -p /root/dockerfile/rabbitmq
cd /root/dockerfile/rabbitmq
mkdir rabbitmq01 rabbitmq02 rabbitmq03

docker run -d --hostname rabbitmq01 --name rabbitmq01 \
-v /root/dockerfile/rabbitmq/rabbitmq01:/var/lib/rabbitmq -p 15673:15672 -p 5673:5672 \
-e RABBITMQ_ERLANG_COOKIE='panpan' \
rabbitmq:management

docker run -d --hostname rabbitmq02 --name rabbitmq02 \
-v /root/dockerfile/rabbitmq/rabbitmq02:/var/lib/rabbitmq -p 15674:15672 -p 5674:5672 \
-e RABBITMQ_ERLANG_COOKIE='panpan' \
--link rabbitmq01:rabbitmq01 \
rabbitmq:management

docker run -d --hostname rabbitmq03 --name rabbitmq03 \
-v /root/dockerfile/rabbitmq/rabbitmq03:/var/lib/rabbitmq -p 15675:15672 -p 5675:5672 \
-e RABBITMQ_ERLANG_COOKIE='panpan' \
--link rabbitmq01:rabbitmq01 \
--link rabbitmq02:rabbitmq02 \
rabbitmq:management

```````````````````````````````````````````````````````````````````````````````

-e RABBITMQ_ERLANG_COOKIE='panpan'   all three nodes share the same Erlang cookie; it authenticates the nodes to each other and must be identical across the cluster.
--link rabbitmq01:rabbitmq01         lets this container reach rabbitmq01 by name.
--hostname                           sets the container's hostname.

2.2 Join the nodes into a cluster:

docker exec -ti rabbitmq01 /bin/bash
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app
exit

Enter the second node:

docker exec -ti rabbitmq02 /bin/bash
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster --ram rabbit@rabbitmq01
rabbitmqctl start_app
exit

Enter the third node:

docker exec -ti rabbitmq03 /bin/bash
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster --ram rabbit@rabbitmq01
rabbitmqctl start_app

At this point it is an ordinary (non-mirrored) cluster.

Convert it to a mirrored cluster:

Enter any one of the containers: docker exec -ti rabbitmq01 /bin/bash

rabbitmqctl set_policy -p / ha "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'

rabbitmqctl list_policies -p /;

Using it from an application:

# RabbitMQ, single node
spring:
  rabbitmq:
    host: localhost
    port: 5672
    username: your_username
    password: your_password

# Or, using addresses only (comma-separated; list every cluster node)
spring:
  rabbitmq:
    addresses: ip1:port1,ip2:port2,ip3:port3
    username: your_username
    password: your_password
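
Sending and consuming code is identical for the single node and the mirrored cluster; the ha policy is transparent to clients. A minimal Spring AMQP sketch (the exchange, routing key, and queue names are made up and must already be declared on the broker):

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class OrderMessageService {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    public void sendOrderCreated(String orderId) {
        // Publishes to whichever cluster node the connection is attached to;
        // the mirrored-queue policy replicates the message to the other nodes.
        rabbitTemplate.convertAndSend("order.exchange", "order.created", orderId);
    }

    @RabbitListener(queues = "order.created.queue")
    public void onOrderCreated(String orderId) {
        System.out.println("received order " + orderId);
    }
}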

Other related notes:

Deploying the application:

Put the jar and the Dockerfile in the same directory and run docker build, using the runtime configuration.

Configure the k8s deployment YAML; just change the names inside.

Push locally built or development images to an image registry so that k8s can pull them.

Docker Hub is slow because it is hosted overseas.

 

Pushing to the Aliyun registry is not covered here.

Front-end build: npm run build

Put the build output and the Dockerfile together.
