Spring Boot integration with Redis, Elasticsearch, and Dubbo

Spring Boot + Redis

Install Redis (Docker)

docker pull redis:latest
docker run -itd --name redis-test -p 6379:6379 redis

Dependencies

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<!-- commons-pool2 object pool, required by the Lettuce connection pool -->
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
</dependency>

No version numbers are needed; the Spring Boot parent POM pins them.

Configuration file: application.yml

# Spring
spring:
  redis:
    database: 1                            # which database to use; default is 0
    host: 127.0.0.1
    port: 6379
    timeout: 30000
    password:
    lettuce:
      pool:
        max-active: 1024
        max-wait: 10000ms
        max-idle: 200
        min-idle: 5

Configuration class: serialization and cache manager (Redis cache)

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.cache.RedisCacheWriter;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import java.time.Duration;

@Configuration
@EnableCaching
public class RedisConfig extends CachingConfigurerSupport {

    // Cache manager used by the caching annotations
    @Bean
    public CacheManager cacheManager(RedisConnectionFactory redisConnectionFactory) {
        RedisCacheConfiguration redisCacheConfiguration = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofHours(1)); // cache entries expire after one hour
        return RedisCacheManager
                .builder(RedisCacheWriter.nonLockingRedisCacheWriter(redisConnectionFactory))
                .cacheDefaults(redisCacheConfiguration).build();
    }

    // String keys, JSON values
    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(new GenericJackson2JsonRedisSerializer());
        return template;
    }
}

First bean: configures the Redis cache (TTL and cache writer).

Second bean: configures Redis serialization so that objects can be stored in Redis and show up as JSON strings in GUI tools. If you only store strings, the built-in StringRedisTemplate is enough.

Caching

Enable it: add this annotation to the Spring Boot main class

@EnableCaching

The annotations go on service-layer query methods.

Caching exists to speed up reads, so it mainly applies to queries.

**Query: @Cacheable**

Code:

    @Autowired
    private ElasticsearchRestTemplate elasticsearchRestTemplate;

    @Cacheable(value = "goodsPage", key = "#searchKey+'-'+#pageNum+''+#pageSize")
    public List<Goods> pageSearch(String searchKey, Integer pageNum, Integer pageSize) {
        NativeSearchQuery query = new NativeSearchQueryBuilder()
                // multiMatchQuery(keyword, ES fields...)
                .withQuery(QueryBuilders.multiMatchQuery(searchKey, "goodsName"))
                // paging: page index is 0-based
                // (PageRequest.of can also take a sort: PageRequest.of(0, 5, Sort.Direction.DESC, "goodsId", "marketPrice"))
                .withPageable(PageRequest.of(pageNum - 1, pageSize))
                // sort by marketPrice ascending
                .withSort(SortBuilders.fieldSort("marketPrice").order(SortOrder.ASC))
                // highlighting; the default style is <em></em> (italic):
                // .withHighlightFields(new HighlightBuilder.Field("goodsName"))
                // highlighting with a custom style:
                .withHighlightBuilder(new HighlightBuilder().field("goodsName")
                        .preTags("<span style='color:red;'>").postTags("</span>"))
                .build();

        List<Goods> result = new ArrayList<>();
        SearchHits<Goods> search = elasticsearchRestTemplate.search(query, Goods.class);
        for (SearchHit<Goods> searchHit : search) {
            String highlightMessage = searchHit.getHighlightField("goodsName").get(0);
            System.out.println("highlight-----------");
            System.out.println(highlightMessage);
            // result object
            Goods goods = searchHit.getContent();
            goods.setGoogsNameH1(highlightMessage);
            goods.setGoodsNum((int) search.getTotalHits());
            System.out.println("total: " + goods.getGoodsNum());
            result.add(goods);
        }
        return result;
    }

3. Call the endpoint

(screenshot omitted)

The returned data:

(screenshot omitted)

The cached data in Redis:

(screenshot omitted)

How the annotation and the query parameters combine:

@Cacheable(value = "goodsPage",key="#searchKey+'-'+#pageNum+''+#pageSize")

value: the first part of the Redis key

key: the second part of the Redis key

The two parts are joined with ::, which shows up looking like a hash-style structure.

Reading values inside the annotation: #paramName, or #objectName.property for an object's field. To combine several values in one key, join them with +'separator'+ between #arg1, #arg2, ...
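To make the key layout concrete, here is a plain-Java sketch (not Spring's actual SpEL evaluator) of the key the annotation above produces. Note that the +''+ between #pageNum and #pageSize adds no separator, so different page requests can collide on the same key; a visible separator such as '-' would avoid this.

```java
public class CacheKeyDemo {
    // Mimics SpEL "#searchKey+'-'+#pageNum+''+#pageSize" plus the value:: prefix.
    static String redisKey(String value, String searchKey, int pageNum, int pageSize) {
        String key = searchKey + "-" + pageNum + "" + pageSize; // '' adds no separator
        return value + "::" + key;
    }

    public static void main(String[] args) {
        System.out.println(redisKey("goodsPage", "phone", 1, 5));  // goodsPage::phone-15
        // Ambiguity: these two distinct page requests share one cache entry.
        System.out.println(redisKey("goodsPage", "phone", 1, 55)); // goodsPage::phone-155
        System.out.println(redisKey("goodsPage", "phone", 15, 5)); // goodsPage::phone-155
    }
}
```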

Update: @CachePut

Applied to methods that write data, such as inserts and updates. When the method runs, its return value is automatically put into the cache.

    @Override
    @Transactional
    @CachePut(value = "goodsCategory", key = "#goodsCategory")
    public int saveCategory(GoodsCategory goodsCategory) {
        // example input: GoodsCategory(id=null, name=手机、数码、通信123, mobileName=数码产品321, parentId=35, parentIdPath=null, level=3, sortOrder=12, isShow=1, image=, isHot=null, catGroup=1, commissionRate=12)
        // build parentIdPath
        String parentPath = goodsCategoryMapper.getParentIdPath(goodsCategory.getParentId());
        Integer maxID = goodsCategoryMapper.getMaxID() + 1;
        String parentIdPath = parentPath + "_" + maxID;
        goodsCategory.setParentIdPath(parentIdPath);
        return goodsCategoryMapper.insert(goodsCategory);
    }

Delete: @CacheEvict

Add @CacheEvict(value = "goods", key = "#paramName") to a create/update/delete method; when the method completes successfully, the cache entry stored under that value and key is removed.

Summary

Data operations fall strictly into two categories.

Writes: update, insert, delete

@CachePut: updates the cached data

@CacheEvict: removes the cached data

Both annotations can be used on writes; the difference:

  • @CachePut: updates the cache; efficient, but if another system modifies the database directly, you end up reading stale data
  • @CacheEvict: evicts the cache; stronger consistency, but if a burst of requests hits the same data right after eviction, you get a cache breakdown (stampede)

Reads: queries

@Cacheable: stores the query result in the cache

Annotations operate on the same cached entry only when their value is the same and their key is built from the same attributes:

@Cacheable(value = "goodsPage",key="#searchKey+'-'+#pageNum+''+#pageSize")

The query result is cached in Redis as type String:

  • key: the annotation's value, then ::, then the annotation's key
  • value: the serialized query result

@CachePut(value = "goodsPage",key="#searchKey+'-'+#pageNum+''+#pageSize")

Updates the cached value, typically from a service-layer write. Note: the value must match the query's value, and the key must follow the same format.

    @Override
    @Cacheable(value = "goodsCategory", key = "#pId")
    public List<GoodsCategory> selectCategoryParentId(Integer pId) {
        Map<String, Object> params = new HashMap<>();
        params.put("parent_id", pId);
        return goodsCategoryMapper.selectByMap(params);
    }

    @Override
    @Transactional
    @CachePut(value = "goodsCategory", key = "#goodsCategory.parentId")
    public int saveCategory(GoodsCategory goodsCategory) {
        // note: @CachePut caches this method's return value (the insert count)
        // build parentIdPath
        String parentPath = goodsCategoryMapper.getParentIdPath(goodsCategory.getParentId());
        Integer maxID = goodsCategoryMapper.getMaxID() + 1;
        goodsCategory.setParentIdPath(parentPath + "_" + maxID);
        return goodsCategoryMapper.insert(goodsCategory);
    }

@CachePut only takes effect here because its value matches @Cacheable's and both keys are built from parentId.

@CacheEvict(value = "goodsPage",key="#searchKey+'-'+#pageNum+''+#pageSize")

    @Override
    @Cacheable(value = "goodsCategory", key = "#pId")
    public List<GoodsCategory> selectCategoryParentId(Integer pId) {
        Map<String, Object> params = new HashMap<>();
        params.put("parent_id", pId);
        return goodsCategoryMapper.selectByMap(params);
    }

    @Override
    @Transactional
    @CacheEvict(value = "goodsCategory", key = "#goodsCategory.parentId")
    public int saveCategory(GoodsCategory goodsCategory) {
        // build parentIdPath
        String parentPath = goodsCategoryMapper.getParentIdPath(goodsCategory.getParentId());
        Integer maxID = goodsCategoryMapper.getMaxID() + 1;
        goodsCategory.setParentIdPath(parentPath + "_" + maxID);
        return goodsCategoryMapper.insert(goodsCategory);
    }

@CacheEvict only takes effect here because its value matches @Cacheable's and both keys are built from parentId.

Spring Boot + Elasticsearch

Installation (CentOS cluster)

1. VMs: 192.168.10.100, 192.168.10.101, 192.168.10.102

2. Download: elasticsearch-7.6.2-linux-x86_64.tar.gz

3. Copy it to 192.168.10.100 (upload via Xftp or FinalShell)

4. Copy it to the other two machines (uploading as in step 3 also works):

scp /root/elasticsearch-7.6.2-linux-x86_64.tar.gz root@192.168.10.101:/root/

scp /root/elasticsearch-7.6.2-linux-x86_64.tar.gz root@192.168.10.102:/root/

5. Now the batch steps, on all three machines together:

tar zxvf elasticsearch-7.6.2-linux-x86_64.tar.gz -C /usr/local/
# Elasticsearch refuses to run as root
groupadd es
useradd es -g es

# fix ownership
chown -Rf es:es /usr/local/elasticsearch-7.6.2/

# JDK: skip this step if the machine already has JDK 11 installed
# append to the end of this file
vim bin/elasticsearch-env

JAVA_HOME="/usr/local/elasticsearch-7.6.2/jdk/"

# allow remote access
# in a real environment, restrict this to the IPs that should have access
vi config/elasticsearch.yml

network.host: 0.0.0.0

# raise runtime limits
# append at the end
vi /etc/security/limits.conf

es soft nofile 65535
es hard nofile 65535
es soft nproc 4096
es hard nproc 4096

# raise the mmap count for the JVM
vi /etc/sysctl.conf

vm.max_map_count = 262144
# reload kernel settings
sysctl -p

# switch the JVM garbage collector: UseConcMarkSweepGC was deprecated back in JDK 9 and causes a startup warning
# change: -XX:+UseConcMarkSweepGC  to: -XX:+UseG1GC
vi config/jvm.options

# cluster configuration
vi config/elasticsearch.yml

#=========================================== node.name differs per machine: node-1, node-2, node-3
# if you change node.name, cluster.initial_master_nodes must list the matching node names
cluster.name: es # cluster name; must be identical across the cluster
node.name: node-1 # this node's name
http.port: 9200 # HTTP port
# CORS settings (so third-party plugins such as head can query ES)
http.cors.enabled: true
http.cors.allow-origin: "*"
# cluster discovery
discovery.seed_hosts: ["192.168.10.100", "192.168.10.101", "192.168.10.102"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
discovery.zen.ping_timeout: 60s
#=======================================================================

# Elasticsearch is memory-hungry; on my 1 GB / 1 CPU VMs I lowered the JVM heap to 512m
vi config/jvm.options

-Xms512m
-Xmx512m

# done; some of the files above need root to edit, so switch to root if an edit fails
su root
# fix ownership one last time to avoid permission problems
chown -Rf es:es /usr/local/elasticsearch-7.6.2/
# run as the es user (three nodes in the cluster; start two for now)
su es
/usr/local/elasticsearch-7.6.2/bin/elasticsearch

Visit: http://192.168.10.100:9200/_cluster/health?pretty

(screenshot omitted)

Two nodes are visible, so the installation is complete.

Chrome GUI: the ElasticSearch Head extension

(screenshot omitted)

Or the Elasticvue extension for Microsoft Edge

(screenshot omitted)

The data shown there was imported with Logstash.

Importing MySQL data into Elasticsearch

ES site: https://www.elastic.co/products/logstash

Download: https://www.elastic.co/cn/downloads/logstash

Logstash is an open-source server-side data processing pipeline that can ingest data from multiple sources simultaneously, transform it, and send it to your favorite "stash" (ours, of course, is Elasticsearch).

mkdir -p /usr/local/logstash
tar -zxvf logstash-7.6.2.tar.gz -C /usr/local/logstash/
cd /usr/local/logstash/logstash-7.6.2
bin/logstash -e 'input { stdin {} } output { stdout {} }'

Type something on stdin (e.g. hello-world); if it comes back as a message event, Logstash started successfully.

Stop Logstash: Ctrl+C

Upload the MySQL JDBC driver to: /usr/local/logstash/logstash-7.6.2/logstash-core/lib/jars/

jdbc.conf:

cd /usr/local/logstash/logstash-7.6.2
vi jdbc.conf

Contents:

input {
  stdin {
  }
  jdbc {
    # database connection settings
    jdbc_connection_string => "jdbc:mysql://localhost:3306/shop?useUnicode=true&characterEncoding=UTF-8&serverTimezone=Asia/Shanghai"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_user => "root"
    jdbc_password => "root"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    jdbc_default_timezone => "Asia/Shanghai"
    # file containing the SQL statement to run
    statement_filepath => "/usr/local/logstash/logstash-7.6.2/jdbc.sql"
    # schedule fields (left to right): minute, hour, day of month, month, day of week; all * means run every minute
    schedule => "* * * * *"
    # whether to lowercase column names from the SQL
    lowercase_column_names => false
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "shop"
    # document _id: %{goods_id} takes the goods_id value from the query result and maps it to the ES _id field
    # if the column is aliased, use the alias instead, e.g. %{goodsId}
    document_id => "%{goodsId}"
  }
  stdout {
    codec => json_lines
  }
}

Change the MySQL connection string, username, and password to your own.

jdbc.sql holds your SQL query, for example:

SELECT
  goods_id goodsId,
  goods_name goodsName,
  market_price marketPrice,
  original_img originalImg
FROM
  t_goods
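The aliases above turn snake_case column names (goods_name) into the camelCase field names (goodsName) used by the Elasticsearch mapping and the Goods entity. The renaming rule itself is simple; here is a hypothetical sketch (SnakeToCamel is not part of the original setup):

```java
public class SnakeToCamel {
    // goods_name -> goodsName, market_price -> marketPrice
    static String toCamel(String snake) {
        StringBuilder out = new StringBuilder();
        boolean upperNext = false;
        for (char c : snake.toCharArray()) {
            if (c == '_') { upperNext = true; continue; }
            out.append(upperNext ? Character.toUpperCase(c) : c);
            upperNext = false;
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(toCamel("goods_name"));   // goodsName
        System.out.println(toCamel("original_img")); // originalImg
    }
}
```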

Check that jdbc.conf is valid:

bin/logstash -f /usr/local/logstash/logstash-7.6.2/jdbc.conf -t

Create the index

curl -X PUT http://localhost:9200/shop -H 'Content-Type:application/json' -d'{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}'

Set the mapping

curl -X POST http://localhost:9200/shop/_mapping -H 'Content-Type:application/json' -d'{
  "properties": {
    "goodsName": {
      "type": "text",
      "analyzer": "ik_max_word",
      "search_analyzer": "ik_max_word"
    }
  }
}'

Run the import:

cd /usr/local/logstash/logstash-7.6.2/
bin/logstash -f /usr/local/logstash/logstash-7.6.2/jdbc.conf

Maven dependency

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

application.yml

# Elasticsearch configuration
spring:
  elasticsearch:
    rest:
      uris: 192.168.10.100:9200,192.168.10.101:9200,192.168.10.102:9200

Now the Spring Boot code

POJO class:

Annotate the entity that maps to the index with @Document; key attributes:

indexName = "index name"

shards = number of shards (default 1)

replicas = number of replicas (default 1)

Add @Id on the id property.

Annotate each field with @Field and specify its type; key attributes:

type = FieldType.XXX: the field type; Keyword is not analyzed, Text is analyzed, matching the Elasticsearch field types

analyzer = "analyzer name" for fields that need analysis (for the ik analyzer, the fixed value is ik_max_word)

createIndex = boolean: whether to create the index (default true)

import lombok.Data;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;
import java.io.Serializable;
import java.math.BigDecimal;
import java.util.Date;

/**
 * @author lwf
 * @since 1.0.0
 */
@Document(indexName = "shop", shards = 5, replicas = 1, createIndex = false)
@Data
public class Goods implements Serializable {
    /**
     * goods id
     */
    @Id
    private Integer goodsId;
    /**
     * goods name
     */
    @Field(type = FieldType.Text, analyzer = "ik_max_word")
    private String goodsName;
    /**
     * market price
     */
    @Field(type = FieldType.Double)
    private BigDecimal marketPrice;
    /**
     * original uploaded image
     */
    @Field(type = FieldType.Keyword)
    private String originalImg;
    // highlighted name
    private String googsNameH1;
    // total hit count
    private Integer goodsNum;
    // date added
    private Date addTime;
}

Query code

import com.xxxx.protal.dao.GoodsRepository;
import com.xxxx.protal.pojo.Goods;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.elasticsearch.search.sort.SortBuilders;
import org.elasticsearch.search.sort.SortOrder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate;
import org.springframework.data.elasticsearch.core.SearchHit;
import org.springframework.data.elasticsearch.core.SearchHits;
import org.springframework.data.elasticsearch.core.query.NativeSearchQuery;
import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;
import org.springframework.stereotype.Service;

import java.util.ArrayList;
import java.util.List;

/**
 * @author lwf
 * @title: EsSearchService
 * @projectName shop
 * @date 2021/1/5 15:24
 */
@Service
public class EsSearchService {

    @Autowired
    private ElasticsearchRestTemplate elasticsearchRestTemplate;

    @Cacheable(value = "goodsPage", key = "#searchKey+'-'+#pageNum+''+#pageSize")
    public List<Goods> pageSearch(String searchKey, Integer pageNum, Integer pageSize) {
        NativeSearchQuery query = new NativeSearchQueryBuilder()
                // multiMatchQuery(keyword, ES fields...)
                .withQuery(QueryBuilders.multiMatchQuery(searchKey, "goodsName"))
                // paging: page index is 0-based
                // (PageRequest.of can also take a sort: PageRequest.of(0, 5, Sort.Direction.DESC, "goodsId", "marketPrice"))
                .withPageable(PageRequest.of(pageNum - 1, pageSize))
                // sort by marketPrice ascending
                .withSort(SortBuilders.fieldSort("marketPrice").order(SortOrder.ASC))
                // highlighting; the default style is <em></em> (italic):
                // .withHighlightFields(new HighlightBuilder.Field("goodsName"))
                // highlighting with a custom style:
                .withHighlightBuilder(new HighlightBuilder().field("goodsName")
                        .preTags("<span style='color:red;'>").postTags("</span>"))
                .build();

        List<Goods> result = new ArrayList<>();
        SearchHits<Goods> search = elasticsearchRestTemplate.search(query, Goods.class);
        for (SearchHit<Goods> searchHit : search) {
            String highlightMessage = searchHit.getHighlightField("goodsName").get(0);
            System.out.println("highlight-----------");
            System.out.println(highlightMessage);
            // result object
            Goods goods = searchHit.getContent();
            goods.setGoogsNameH1(highlightMessage);
            goods.setGoodsNum((int) search.getTotalHits());
            System.out.println("total: " + goods.getGoodsNum());
            result.add(goods);
        }
        return result;
    }

}

Usage similar to JPA:

@Repository
public interface GoodsRepository extends ElasticsearchRepository<Goods, Integer> {

	List<Goods> findByGoodsName(String goodsName);


	@Query("{\"match\": {\"goodsId\":{ \"query\": \"?0\"}}}")
	Goods findByGoodsIdValue(Integer id);


}

Spring Boot + Dubbo (ZooKeeper registry)

(architecture diagram omitted)

The shared interface lives in its own jar; the provider and the consumer are each separate projects.

Flow:

When a provider starts, it writes its own URL under the /dubbo/com.foo.BarService/providers directory.

When a consumer starts, it subscribes to the provider URLs under /dubbo/com.foo.BarService/providers, and writes its own URL under /dubbo/com.foo.BarService/consumers.

When the monitoring center starts, it subscribes to all provider and consumer URLs under /dubbo/com.foo.BarService.
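A minimal sketch to make the registry layout above concrete (the DubboZkPaths helper is hypothetical; the path scheme is the one from the flow description):

```java
public class DubboZkPaths {
    // /dubbo/<interface>/<category>, e.g. /dubbo/com.foo.BarService/providers
    static String path(String serviceInterface, String category) {
        return "/dubbo/" + serviceInterface + "/" + category;
    }

    public static void main(String[] args) {
        // a provider registers under:
        System.out.println(path("com.foo.BarService", "providers"));
        // a consumer registers under (and subscribes to the providers path):
        System.out.println(path("com.foo.BarService", "consumers"));
    }
}
```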

ZooKeeper cluster (for a single node, a Docker install found online will do)

The three servers are 192.168.10.100, 192.168.10.101, 192.168.10.102.

Extract

tar -zxvf apache-zookeeper-3.6.1-bin.tar.gz -C /usr/local/

Create a data directory and a log directory, and add a myid file inside data. In myid:

192.168.10.100: write only 1

192.168.10.101: write only 2

192.168.10.102: write only 3

mkdir -p /usr/local/apache-zookeeper-3.6.1-bin/data
mkdir -p /usr/local/apache-zookeeper-3.6.1-bin/log
cd /usr/local/apache-zookeeper-3.6.1-bin/data
vim myid

Copy zoo_sample.cfg and rename the copy to zoo.cfg:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/apache-zookeeper-3.6.1-bin/data
dataLogDir=/usr/local/apache-zookeeper-3.6.1-bin/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# cluster configuration
# in server.1, the 1 is the content of the myid file; 2888 is for intra-cluster communication, 3888 for leader election
server.1=192.168.10.100:2888:3888
server.2=192.168.10.101:2888:3888
server.3=192.168.10.102:2888:3888

Start

cd /usr/local/
apache-zookeeper-3.6.1-bin/bin/zkServer.sh start

Check status

apache-zookeeper-3.6.1-bin/bin/zkServer.sh status

Parent pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<packaging>pom</packaging>
	<modules>
		<module>dubbo-api</module>
		<module>dubbo-provider</module>
		<module>dubbo-consumer</module>
	</modules>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.3.5.RELEASE</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.xxxx</groupId>
	<artifactId>dubbo-demo</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>dubbo-demo</name>
	<description>Demo project for Spring Boot</description>

	<properties>
		<java.version>11</java.version>
		<dubbo.version>2.7.3</dubbo.version>
	</properties>

	<dependencyManagement>
		<dependencies>
			<!-- Dubbo dependency -->
			<dependency>
				<groupId>org.apache.dubbo</groupId>
				<artifactId>dubbo-spring-boot-starter</artifactId>
				<version>${dubbo.version}</version>
			</dependency>
			<!-- ZooKeeper registry client: pulls in the Curator client -->
			<dependency>
				<groupId>org.apache.dubbo</groupId>
				<artifactId>dubbo-dependencies-zookeeper</artifactId>
				<version>${dubbo.version}</version>
				<type>pom</type>
			</dependency>
		</dependencies>
	</dependencyManagement>

</project>

API shared-interface jar (plain Java)

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<parent>
		<artifactId>dubbo-demo</artifactId>
		<groupId>com.xxxx</groupId>
		<version>0.0.1-SNAPSHOT</version>
	</parent>
	<modelVersion>4.0.0</modelVersion>

	<artifactId>dubbo-api</artifactId>

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>
		<!-- Dubbo dependency -->
		<dependency>
			<groupId>org.apache.dubbo</groupId>
			<artifactId>dubbo-spring-boot-starter</artifactId>
		</dependency>
		<!-- ZooKeeper registry client: pulls in the Curator client -->
		<dependency>
			<groupId>org.apache.dubbo</groupId>
			<artifactId>dubbo-dependencies-zookeeper</artifactId>
			<type>pom</type>
		</dependency>
	</dependencies>


</project>

Interface

public interface MyService {
   List<Panda> getBySome(Panda panda);
}

POJO class

public class Panda implements Serializable {
    private Integer id;
    private String name;
    private String city;

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getCity() {
        return city;
    }

    public void setCity(String city) {
        this.city = city;
    }

    @Override
    public String toString() {
        return "Panda{" +
                "id=" + id +
                ", name='" + name + '\'' +
                ", city='" + city + '\'' +
                '}';
    }

    public Panda(Integer id, String name, String city) {
        this.id = id;
        this.name = name;
        this.city = city;
    }

    public Panda() {
    }
}

Provider

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<parent>
		<artifactId>dubbo-demo</artifactId>
		<groupId>com.xxxx</groupId>
		<version>0.0.1-SNAPSHOT</version>
	</parent>
	<modelVersion>4.0.0</modelVersion>

	<artifactId>dubbo-provider</artifactId>

	<dependencies>
		<dependency>
			<groupId>com.xxxx</groupId>
			<artifactId>dubbo-api</artifactId>
			<version>0.0.1-SNAPSHOT</version>
		</dependency>
	</dependencies>


</project>

application.yml

server:
  port: 8081

dubbo:
  application:
    # application name
    name: provider
  registry:
    # registry address
    address: zookeeper://192.168.10.100:2181?backup=192.168.10.101:2181,192.168.10.102:2181
    # timeout in milliseconds
    timeout: 6000
  metadata-report:
    address: zookeeper://192.168.10.100:2181
  protocol:
    # protocol name
    name: dubbo
    # protocol port
    port: 20880
  scan:
    # packages to scan for services
    base-packages: com.xxxx.provider.service

Service exposure: in the com.xxxx.provider.service package

import com.xxxx.api.pojo.Panda;
import com.xxxx.api.service.MyService;
import org.apache.dubbo.config.annotation.Service;

import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

/**
 * @author lwf
 * @title: MyServiceImpl
 * @projectName dubbozk
 * @date 2020/12/29 20:13
 */
@Service(version = "1.0", interfaceClass = MyService.class)
public class MyServiceImpl implements MyService {
    @Override
    public List<Panda> getBySome(Panda panda) {
        List<Panda> pandas = new ArrayList<>();
        pandas.add(new Panda(1, "雪宝", "上海"));
        pandas.add(new Panda(2, "雪宝1", "上海"));
        pandas.add(new Panda(3, "金虎", "大连"));
        pandas.add(new Panda(4, "熊猫", "成都"));
        pandas.add(new Panda(5, "熊猫", "成都"));
        pandas.add(new Panda(6, "熊猫", "成都"));
        pandas.add(new Panda(7, "绩效", "重庆"));
        pandas.add(new Panda(8, "小熊猫", "重庆"));

        // start from the full list so a city-only query does not NPE
        List<Panda> list = pandas;
        if (panda.getId() != null) {
            // pandas whose id is greater than the given id
            list = list.parallelStream()
                    .filter(a -> panda.getId() < a.getId())
                    .collect(Collectors.toList());
        }
        if (panda.getCity() != null) {
            // then filter by city
            list = list.stream()
                    .filter(e -> e.getCity().equals(panda.getCity()))
                    .collect(Collectors.toList());
        }
        return list;
    }
}
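The filtering logic in getBySome can be exercised without Spring or Dubbo. Below is a self-contained sketch with a local stand-in class (hypothetical, mirroring the method above); it starts from the full list rather than null so that a city-only query cannot throw a NullPointerException:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class PandaFilterDemo {
    // Local stand-in for the Panda POJO (id + city only).
    static class P {
        final Integer id;
        final String city;
        P(Integer id, String city) { this.id = id; this.city = city; }
    }

    static List<P> sample() {
        return Arrays.asList(new P(1, "上海"), new P(2, "上海"), new P(3, "大连"), new P(7, "重庆"));
    }

    // Mirrors getBySome: keep entries with id > minId, then optionally match city.
    static List<P> filter(List<P> all, Integer minId, String city) {
        List<P> list = all; // start from the full list so a city-only query works
        if (minId != null) {
            list = list.stream().filter(a -> minId < a.id).collect(Collectors.toList());
        }
        if (city != null) {
            list = list.stream().filter(e -> e.city.equals(city)).collect(Collectors.toList());
        }
        return list;
    }

    public static void main(String[] args) {
        System.out.println(filter(sample(), 2, null).size());     // ids 3 and 7 -> 2
        System.out.println(filter(sample(), null, "上海").size()); // ids 1 and 2 -> 2
    }
}
```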

Consumer

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<parent>
		<artifactId>dubbo-demo</artifactId>
		<groupId>com.xxxx</groupId>
		<version>0.0.1-SNAPSHOT</version>
	</parent>
	<modelVersion>4.0.0</modelVersion>

	<artifactId>dubbo-consumer</artifactId>

	<dependencies>
		<dependency>
			<groupId>com.xxxx</groupId>
			<artifactId>dubbo-api</artifactId>
			<version>0.0.1-SNAPSHOT</version>
		</dependency>
	</dependencies>


</project>

Consuming the service

import com.xxxx.api.pojo.Panda;
import com.xxxx.api.service.MyService;
import org.apache.dubbo.config.annotation.Reference;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

@Component
public class UserInit implements CommandLineRunner {
	@Reference(version = "1.0", parameters = {"unicast", "false"})
	private MyService myService;

	@Override
	public void run(String... args) throws Exception {
		myService.getBySome(new Panda(2, null, null)).forEach(System.out::println);
	}
}

Notes:

POJO classes must implement Serializable.

version selects which version of the service to call.

Successful consumption:

(screenshot omitted)

Copy the provider project: both providers get consumed:

(screenshot omitted)

A provider can itself also consume other providers.
