Integrating Elasticsearch with Spring Boot, plus a Vue front end

The versions used here are Elasticsearch 6.2.1, Logstash 6.2.1, and MySQL.

I. Elasticsearch

1. Introduction

Elasticsearch is a Lucene-based search server. It provides a distributed, multi-tenant full-text search engine behind a RESTful web interface. Elasticsearch is written in Java and released as open source under the Apache license, and it is a popular enterprise search engine. It is widely used in cloud environments, delivering real-time search while being stable, reliable, fast, and easy to install and use. Official clients are available for Java, .NET (C#), PHP, Python, Apache Groovy, Ruby, and many other languages. According to the DB-Engines ranking, Elasticsearch is the most popular enterprise search engine, followed by Apache Solr, which is also based on Lucene.

Advantages:
1. Elasticsearch is a highly scalable distributed search server built on Lucene, and it works out of the box.
2. It hides Lucene's complexity and exposes RESTful interfaces for indexing and searching.
3. It scales well: a cluster can grow to hundreds of servers and handle petabytes of data.
4. Indexing and search are near real time.
The rest of this post walks through integrating it with Spring Boot.

Installation (requires JDK 1.8 or later):
Download Elasticsearch from the official site: https://www.elastic.co/downloads/elasticsearch

The extracted directory layout:
bin: scripts, including the executables that start and stop the server
config: configuration files
data: index directory, where index files are stored
logs: log files
modules: the modules that make up ES's functionality
plugins: plugin directory; ES supports a plugin mechanism

2. Configure the three files under config, then start Elasticsearch (if the configuration is wrong, elasticsearch.bat may fail to start)

elasticsearch.yml: runtime parameters for Elasticsearch
jvm.options: JVM settings for Elasticsearch
log4j2.properties: logging configuration for Elasticsearch

1. elasticsearch.yml

cluster.name:            # the cluster name; defaults to "elasticsearch", a meaningful name is recommended
node.name:               # node name; usually one physical server is one node. ES assigns a random name by default; set a meaningful one for easier management
network.host: 0.0.0.0
http.port: 9200          # HTTP port for external access; defaults to 9200
transport.tcp.port: 9300 # port used for communication between cluster nodes
node.master: true
node.data: true
#discovery.zen.ping.unicast.hosts: ["0.0.0.0:9300", "0.0.0.0:9301", "0.0.0.0:9302"]   # initial list of master-eligible nodes in the cluster
discovery.zen.minimum_master_nodes: 1   # minimum number of master-eligible nodes; the formula is (master_eligible_nodes / 2) + 1, e.g. with 3 master-eligible nodes set this to 2
node.ingest: true
bootstrap.memory_lock: false
node.max_local_storage_nodes: 1   # maximum number of storage nodes allowed on one machine; 1 when running a single node, larger if you start several nodes on one dev machine
path.data: D:\ElasticSearch\elasticsearch-3\data
path.logs: D:\ElasticSearch\elasticsearch-3\logs
http.cors.enabled: true
http.cors.allow-origin: /.*/

2. jvm.options

## JVM configuration

################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms1g
-Xmx1g

################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################

## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## optimizations

# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch

## basic

# explicitly set the stack size
-Xss1m

# set to headless, just in case
-Djava.awt.headless=true

# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8

# use our provided JNA always versus the system one
-Djna.nosys=true

# turn off a JDK optimization that throws away stack traces for common
# exceptions because stack traces are important for debugging
-XX:-OmitStackTraceInFastThrow

# flags to configure Netty
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0

# log4j 2
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true

-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=/heap/dump/path

## JDK 8 GC logging

8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m

# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m
# due to internationalization enhancements in JDK 9 Elasticsearch need to set the provider to COMPAT otherwise
# time/date parsing will break in an incompatible way for some date patterns and locals
9-:-Djava.locale.providers=COMPAT

3. log4j2.properties

status = error

# log action execution errors for easier debugging
logger.action.name = org.elasticsearch.action
logger.action.level = debug

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n

appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 128MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB

rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling

appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4

logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = warn
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.additivity = false

appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%d{yyyy-MM-dd}.log
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.time.interval = 1
appender.index_search_slowlog_rolling.policies.time.modulate = true

logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false

appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog-%d{yyyy-MM-dd}.log
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.time.interval = 1
appender.index_indexing_slowlog_rolling.policies.time.modulate = true

logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false

4. Start Elasticsearch

Go into the bin directory and run elasticsearch.bat from cmd.
To verify it started, open http://localhost:9200 in a browser; you should get a JSON response with the node name, cluster name, and version.
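You can also check from code. Here is a minimal sketch using the Elasticsearch low-level REST client (the same client the Spring Boot section below pulls in); the class name EsPing is just for illustration:

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class EsPing {
    public static void main(String[] args) throws Exception {
        // connect to the local node started above
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // GET / returns the node name, cluster name and version as JSON
            Response response = client.performRequest(new Request("GET", "/"));
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}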

3. Installing the head visualization plugin

The head plugin is a visual management tool for ES. It monitors cluster state and lets you interact with the ES service through a browser client, for example to create mappings and indices. The project lives at https://github.com/mobz/elasticsearch-head
Since ES 6.0, the head plugin runs on Node.js (install Node.js first).
After downloading, place it next to the elasticsearch directory, open cmd in its root directory, and run: npm run start (head serves on http://localhost:9100 by default, where you connect to your ES node).
Click the Indices tab and then New Index to create an index. Choose the number of shards based on your data volume; for development, the number of replicas can be set to 0.
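Index creation can also be done from code instead of the head UI. A minimal sketch with the high-level client; the index name and the 1-shard/0-replica settings are example values only:

import org.apache.http.HttpHost;
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.settings.Settings;

public class CreateIndexDemo {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
            CreateIndexRequest request = new CreateIndexRequest("goods")
                    .settings(Settings.builder()
                            .put("index.number_of_shards", 1)      // shards depend on data volume
                            .put("index.number_of_replicas", 0));  // 0 replicas is fine for development
            CreateIndexResponse response = client.indices().create(request, RequestOptions.DEFAULT);
            System.out.println("acknowledged: " + response.isAcknowledged());
        }
    }
}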

4. Installing the IK analyzer

Download it from https://github.com/medcl/elasticsearch-analysis-ik (the plugin version must match your ES version).
After downloading, unzip it and copy the extracted files into an ik directory under the plugins directory of your ES installation, then restart ES.
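To confirm the plugin is active after restarting ES, you can call the _analyze API. A minimal sketch with the low-level client, assuming the default ik_max_word analyzer name that the plugin registers:

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class IkAnalyzeCheck {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("GET", "/_analyze");
            // ik_max_word is the fine-grained IK analyzer; ik_smart is the coarse-grained one
            request.setJsonEntity("{\"analyzer\":\"ik_max_word\",\"text\":\"中华人民共和国\"}");
            Response response = client.performRequest(request);
            // the response lists the tokens the analyzer produced
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}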

II. Installing and Using Logstash

1. Install Logstash
Logstash is an open-source tool in the Elastic stack. It can collect data from multiple sources simultaneously, transform it, and send it to Elasticsearch to build indices. This project uses Logstash to pull data from MySQL into the ES index. Download Logstash from the official site.
Note that the Logstash version must match your ES version.

2. Install logstash-input-jdbc
logstash-input-jdbc is written in Ruby, so download and install Ruby first: https://rubyinstaller.org/downloads/
Logstash 5.x ships with logstash-input-jdbc bundled; 6.x does not, so it has to be installed manually from the Logstash bin directory:

.\logstash-plugin.bat install logstash-input-jdbc

3. Configure the index template JSON and mysql.conf
1. mysql.conf

input {
  stdin {
  }
  jdbc {
  jdbc_connection_string => "jdbc:mysql://localhost:3306/database?useUnicode=true&characterEncoding=utf-8&useSSL=true&serverTimezone=UTC"
  # the user we wish to execute our statement as
  jdbc_user => "root"
  jdbc_password => "root"
  # the path to our downloaded jdbc driver
  jdbc_driver_library => "D:/Maven/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar"
  # the name of the driver class for mysql
  jdbc_driver_class => "com.mysql.jdbc.Driver"
  jdbc_paging_enabled => "true"
  jdbc_page_size => "50000"
  # a SQL file can be referenced instead of an inline statement
  #statement_filepath => "/conf/course.sql"
  statement => "select * from wp_ex_source_goods_tb_cat_copy where timestamp > date_add(:sql_last_value,INTERVAL 8 HOUR)"
  # cron-style schedule; "* * * * *" runs the query every minute
  schedule => "* * * * *"
  record_last_run => true
  last_run_metadata_path => "D:/ElasticSearch/logstash-6.2.1/config/logstash_metadata"
  }
}


output {
  elasticsearch {
  # ES host and port
  hosts => "localhost:9200"
  #hosts => ["localhost:9200","localhost:9202","localhost:9203"]
  # ES index name
  index => "goods"
  document_id => "%{cid}"
  document_type => "doc"
  template =>"D:/ElasticSearch/logstash-6.2.1/config/goods_template.json"
  template_name =>"goods"
  template_overwrite =>"true"
  }
  stdout {
  # print each event as a JSON line
  codec => json_lines
  }
}

2. goods_template.json (the file name here is mine; use your own)

{
  "mappings": {
    "doc": {
      "properties": {
        "cid":       { "type": "text" },
        "name":      { "type": "keyword" },
        "is_parent": { "type": "text" },
        "parent_id": { "type": "text" },
        "level":     { "type": "text" },
        "pathid":    { "type": "text" },
        "path":      { "type": "text" }
      }
    }
  },
  "template": "goods"
}

4. Start Logstash to pull MySQL data into ES

Start logstash.bat from the bin directory:

.\logstash.bat -f ..\config\mysql.conf

Logstash performs the incremental pull based on the table's timestamp column: each scheduled run compares row timestamps against the last run time recorded in config/logstash_metadata, which is exposed to the SQL statement as :sql_last_value.
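Once Logstash has done a pull, it is worth confirming that documents actually landed in the index. A minimal sketch that asks the _count endpoint of the goods index (class name is illustrative):

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class GoodsCountCheck {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // _count returns {"count": N, ...} for the goods index
            Response response = client.performRequest(new Request("GET", "/goods/_count"));
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}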

III. Integrating ES with Spring Boot

Core files:

pom.xml. Note: the ES server and client versions must match, otherwise requests may get no response and NullPointerExceptions can occur.

<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>6.5.4</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>6.5.4</version>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-io</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
</dependency>

application.yml

server:
  port: 8083
spring:
  application:
    name: sbes
cat: # custom configuration prefix; the name is arbitrary
  elasticsearch:
    hostlist: ${eshostlist:127.0.0.1:9200}
    es:
      index: goods
      type: doc
      source_field: cid,name,is_parent,parent_id,level,pathid,path,timestamp

ElasticsearchConfig.java

/**
 * @author Administrator
 * @version 1.0
 **/
@Configuration
public class ElasticsearchConfig {

    @Value("${cat.elasticsearch.hostlist}")
    private String hostlist;

    @Bean
    public RestHighLevelClient restHighLevelClient(){
        //parse the hostlist configuration
        String[] split = hostlist.split(",");
        //build an HttpHost array holding each ES host and port entry
        HttpHost[] httpHostArray = new HttpHost[split.length];
        for(int i=0;i<split.length;i++){
            String item = split[i];
            httpHostArray[i] = new HttpHost(item.split(":")[0], Integer.parseInt(item.split(":")[1]), "http");
        }
        //create the RestHighLevelClient
        return new RestHighLevelClient(RestClient.builder(httpHostArray));
    }

    //the project mainly uses RestHighLevelClient; the low-level client is not needed for now
    @Bean
    public RestClient restClient(){
        //parse the hostlist configuration
        String[] split = hostlist.split(",");
        //build an HttpHost array holding each ES host and port entry
        HttpHost[] httpHostArray = new HttpHost[split.length];
        for(int i=0;i<split.length;i++){
            String item = split[i];
            httpHostArray[i] = new HttpHost(item.split(":")[0], Integer.parseInt(item.split(":")[1]), "http");
        }
        return RestClient.builder(httpHostArray).build();
    }
}

EsCatService.java:
Note: if from + size in a paginated query exceeds 10,000 (the default index.max_result_window), ES rejects the request, so the limit has to be raised on the index first; this is a one-line settings update that can be issued from the head plugin (9200 is the ES address, goods is the index name).
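For reference, the same settings update can be issued from code. A minimal sketch; the 100000 limit is an example value, and raising it increases memory pressure for deep pagination:

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class RaiseResultWindow {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // raise the deep-pagination limit on the goods index (default 10000)
            Request request = new Request("PUT", "/goods/_settings");
            request.setJsonEntity("{\"index\":{\"max_result_window\":100000}}");
            System.out.println(client.performRequest(request).getStatusLine());
        }
    }
}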

@Service
public class EsCatService {
    @Value("${cat.elasticsearch.es.index}")
    private String es_index;
    @Value("${cat.elasticsearch.es.type}")
    private String es_type;
    @Value("${cat.elasticsearch.es.source_field}")
    private String source_field;
    @Autowired
    RestHighLevelClient restHighLevelClient;

    public QueryResponseResult<TbCatCopy> list(int page, int size, String keyword) throws IOException {
        //create the search request
        SearchRequest searchRequest = new SearchRequest(es_index);
        //set the document type to search
        searchRequest.types(es_type);
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
        //restrict which source fields come back
        String[] source_fields = source_field.split(",");
        searchSourceBuilder.fetchSource(source_fields, new String[]{});
        //keyword search
        if (StringUtils.isNotEmpty(keyword)) {
            //match the keyword against the name field
            MultiMatchQueryBuilder multiMatchQueryBuilder = QueryBuilders.multiMatchQuery(keyword, "name");
//            //set the minimum-should-match percentage
//            multiMatchQueryBuilder.minimumShouldMatch("70%");
//            //boost the name field
//            multiMatchQueryBuilder.field("name", 10);
            boolQueryBuilder.must(multiMatchQueryBuilder);
        }
        //bool query (filtering)
        searchSourceBuilder.query(boolQueryBuilder);
        if (page <= 0) {
            page = 1;
        }
        if (size <= 0) {
            size = 10;
        }
        //index of the first record to return
        int from = (page - 1) * size;
        searchSourceBuilder.from(from);
        searchSourceBuilder.size(size);
        searchRequest.source(searchSourceBuilder);
        QueryResult<TbCatCopy> queryResult = new QueryResult<>();
        //result list
        List<TbCatCopy> list = new ArrayList<>();
        //execute the search
        SearchResponse searchResponse = restHighLevelClient.search(searchRequest);
        //get the hits from the response
        SearchHits hits = searchResponse.getHits();
        //total hit count
        long totalHits = hits.getTotalHits();
        queryResult.setTotal(totalHits);
        //map each hit back onto the entity
        SearchHit[] searchHits = hits.getHits();
        for (SearchHit hit : searchHits) {
            TbCatCopy tbCatCopy = new TbCatCopy();
            Map<String, Object> sourceAsMap = hit.getSourceAsMap();
            Integer cid = (Integer) sourceAsMap.get("cid");
            tbCatCopy.setCid(cid);
            String name = (String) sourceAsMap.get("name");
            tbCatCopy.setName(name);
            Integer is_parent = (Integer) sourceAsMap.get("is_parent");
            tbCatCopy.setIs_parent(is_parent);
            Integer parent_id = (Integer) sourceAsMap.get("parent_id");
            tbCatCopy.setParent_id(parent_id);
            Integer level = (Integer) sourceAsMap.get("level");
            tbCatCopy.setLevel(level);
            String pathid = (String) sourceAsMap.get("pathid");
            tbCatCopy.setPathid(pathid);
            String path = (String) sourceAsMap.get("path");
            tbCatCopy.setPath(path);
            list.add(tbCatCopy);
        }
        queryResult.setList(list);
        return new QueryResponseResult<TbCatCopy>(CommonCode.SUCCESS, queryResult);
    }
}
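The Vue page below calls GET /cat/list, so a thin controller has to expose this service. A minimal sketch; the class name and mappings are my assumption, not taken from the original project:

import java.io.IOException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

//minimal controller sketch; names are assumptions
@RestController
@RequestMapping("/cat")
public class EsCatController {

    @Autowired
    private EsCatService esCatService;

    //GET /cat/list?page=1&size=10&keyword=...
    @GetMapping("/list")
    public QueryResponseResult<TbCatCopy> list(@RequestParam("page") int page,
                                               @RequestParam("size") int size,
                                               @RequestParam(value = "keyword", required = false) String keyword) throws IOException {
        return esCatService.list(page, size, keyword);
    }
}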
Java query operations in ES

An overview of the common query builders (a short sketch of a few of them follows this list):

matchAllQuery: matches all documents
queryStringQuery: Lucene-style query string search
wildcardQuery: wildcard query; ? matches a single character, * matches any number of characters
termQuery: exact term query
matchQuery: analyzed match on a field
idsQuery: look up documents by ID
fuzzyQuery: fuzzy (edit-distance) similarity query
rangeQuery (with includeLower/includeUpper): range query
boolQuery: compound (complex) query
SortOrder: result sorting
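A quick sketch of a few of these builders in use; field names reuse the goods mapping above, and the snippet only builds and prints the query JSON locally:

import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.SortOrder;

public class QueryExamples {
    public static void main(String[] args) {
        SearchSourceBuilder source = new SearchSourceBuilder();
        //compound query: name must match the keyword, level must equal "1"
        source.query(QueryBuilders.boolQuery()
                .must(QueryBuilders.matchQuery("name", "手机"))
                .filter(QueryBuilders.termQuery("level", "1")));
        //wildcard: ? matches one character, * matches any number
        //source.query(QueryBuilders.wildcardQuery("name", "手*"));
        //range query with both bounds included
        //source.query(QueryBuilders.rangeQuery("cid").gte(1).lte(100));
        //sorting requires a sortable field type (keyword or numeric), not text
        source.sort("cid", SortOrder.DESC);
        //prints the query DSL as JSON
        System.out.println(source);
    }
}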

IV. Rendering the page with Vue.js

If you are new to Vue.js, see my earlier three-level linkage post (springboot+vue.js/ajax+mysql+SpringDataJpa/Mybatis), which uses Vue.js and also shows the Ajax approach.

1. proxyTable configuration:

proxyTable: {
  '/cat': {
    // dev environment
    target: 'http://localhost:8083', // backend host
    changeOrigin: true, // allow cross-origin
    pathRewrite: {
      '^/cat': '' // strip the /cat prefix before forwarding
    }
  }
}

2. Vue code

<template>
  <div>
    <div class="usertable">
      <el-row class="usersavebtn">
        <el-input v-model="input" placeholder="请输入内容" style="width:20%"></el-input><el-button type="primary" icon="el-icon-search" @click="init()">搜索</el-button>
      </el-row>
      <el-table :data="tableData" stripe style="width: 100%">
        <el-table-column prop="cid" label="编号" width="150"></el-table-column>
        <el-table-column prop="name" label="商品名" width="200"></el-table-column>
        <el-table-column prop="level" label="等级" width="150"></el-table-column>
        <el-table-column prop="pathid" label="类别编号" width="200"></el-table-column>
        <el-table-column prop="path" label="类别" width="350"></el-table-column>
      </el-table>
      <el-pagination
        background
        layout="prev, pager, next"
        :total="total"
        :page-size="params.size"
        :current-page="params.page"
        @current-change="changePage"
        style="float: right">
      </el-pagination>
    </div>
  </div>
</template>

<script>
  const axios = require('axios')
  export default {
    name: 'catpage',
    data () {
      return {
        tableData: [],
        dialogFormVisible: false,
        formLabelWidth: '120px',
        total:0,
        params:{
          page:0,
          size:10,
        },
        input: ''
        // form validation rules could go here
      }
    },
    created: function() {
      this.init()
    },
    methods: {
      changePage:function (page) {
        this.params.page=page;
         this.init();
      }
     ,
      init: function (h) {
        var app = this
        var a = h == undefined ? app : h
        var keyword=a.input
        var pageNum=a.params.page
        var pageSize=a.params.size
        console.log('init')
        axios
          .get('/cat/list',{
            params: {
              'page':pageNum,
              'size':pageSize,
              'keyword':keyword,
            }
          })
          .then(function (response) { // handle success
            console.log('============', response)
            a.tableData = response.data.queryResult.list // bind the returned list to the table
            a.total = response.data.queryResult.total // total record count for the pager
          })
          })
          .catch(function (error) {// handle error
            console.log(error)
          })
      }
    }
  }
</script>

<style scoped>
  .usertable {
    margin: 0 auto;
    width: 70%;
    position: relative;
  }
</style>

3. The result: a searchable, paginated product table rendered with Element UI.
