An Illustrated Guide to _source, _all, store, and index in Elasticsearch

This article uses diagrams to explain the _source, _all, store, and index attributes in Elasticsearch: what they do, how to configure them, and how they affect search and highlighting.

Several Elasticsearch attributes are easy to confuse. What exactly is kept in the _source field? How does setting store to true or false relate to _source? How does store relate to _all? What does the index attribute control? When should store be set to true, and when should the _all field be enabled? This article works through _source, _all, store, and index with the help of diagrams.


Figure 1: How _source, _all, store, and index fit together in Elasticsearch

As shown in Figure 1, the second quadrant holds an original document with two fields, title and content, whose values are "我是中国人" and "热爱共产党". When this document is written to Elasticsearch, by default two copies of its content are kept. One is the original document itself, i.e. the content of the _source field: when you search in Elasticsearch, the document content you see comes from _source, as in Figure 2, an interface most readers will find very familiar.
Figure 2: An example of the _source field

The other copy is the inverted index. Its core data structure is the postings list, which records the mapping between terms and documents: for example, the term "中国人" appears in the document with ID 1. The postings list stores this mapping, along with term frequencies and other information. Under the hood Elasticsearch uses the Lucene API, and full-text search is possible precisely because this inverted index is stored. Take the inverted index away, and Elasticsearch starts to look a lot like MongoDB.
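
To see which terms actually end up in the postings list, you can run the analyzer directly with the _analyze API. This is only an illustrative sketch: it assumes the ik plugin is installed, and depending on the plugin version the analyzer name may be ik, ik_max_word, or ik_smart:

GET _analyze
{
    "analyzer": "ik",
    "text": "我是中国人"
}

The tokens returned (terms such as "中国人", "中国", "国人") are what gets written into the inverted index for that text.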

When a document is indexed into Elasticsearch, by default an inverted index is built for every field (except fields that dynamic mapping resolves to numeric or boolean types). Whether a field is added to the inverted index is controlled by its index attribute. Before Elasticsearch 5, the index attribute had three possible values (a minimal mapping sketch follows the list):

  1. analyzed: the field is indexed and analyzed (tokenized), so it is searchable. Conversely, if you need to search on a field, its index attribute should be analyzed.
  2. not_analyzed: the field value is not analyzed and is written to the index as-is. Conversely, fields that require exact matching, such as personal names or place names, are best set to not_analyzed.
  3. no: the field is not written to the index at all and therefore cannot be searched. Conversely, if business rules require that a field must not be searchable, set its index attribute to no.
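
A minimal mapping sketch, in the pre-5.0 string syntax used throughout this article, puts the three settings side by side; the field names here are made up purely for illustration:

{
    "yourtype": {
        "properties": {
            "title":  { "type": "string", "index": "analyzed" },
            "author": { "type": "string", "index": "not_analyzed" },
            "secret": { "type": "string", "index": "no" }
        }
    }
}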

Now for the _all field. As the name suggests, _all is a catch-all super field that contains everything in a document. Taking the document in the figure as an example, if _all is enabled, title and content are concatenated into one super field that holds the content of all the other fields. You can also choose to include only certain fields in _all, or to exclude some.

Back to the first quadrant of Figure 1: the user enters the keyword "中国人". After analysis, Elasticsearch looks up the postings list to find which documents contain the term "中国人". Note the shift in terminology: before analysis, "中国人" is the user's query; after analysis, inside the inverted index, "中国人" is a term. Elasticsearch then returns the matching documents (usually resolved from a set of document IDs) to the user, as shown in the fourth quadrant of Figure 1.

Normally the user's query keywords are highlighted in the results, as shown in Figure 3:

Figure 3: Keyword highlighting in a search engine

Highlighting essentially uses the term offsets recorded in the postings to locate the keywords in the original text and wrap them in front-end highlight markup. This is where the store attribute comes in: store controls whether a field's original value is written to the index, and it defaults to false. In Lucene, highlighting depends heavily on whether the field is stored, because the offsets have to be applied to the original text to produce the highlighted fragments. In Elasticsearch, a copy of the original document already lives in _source, so highlighting can be driven from _source, and storing the original value in the index again would be redundant; that is why Elasticsearch leaves store set to false by default.
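
As an aside, when highlighting long fields turns out to be slow, one option (not used in the tests below) is to store term vectors with positions and offsets so that Lucene's fast vector highlighter can be applied. A hedged mapping sketch, with an illustrative field name:

{
    "yourtype": {
        "properties": {
            "content": {
                "type": "string",
                "term_vector": "with_positions_offsets"
            }
        }
    }
}

This trades additional index size for faster highlighting of long text.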

Note: to highlight a field, you must keep at least one of _source and store. Test code is given below.

With that, the questions raised at the beginning have all been answered. The sections below show the common configurations for these attributes.

1. _source configuration

The _source field is stored by default. When would you not keep it? If a field is very large and the business only needs to search on it and return the document ID, with the full content then fetched again from MySQL or HBase, storing that large field in Elasticsearch only bloats the index. The more documents you have, the more noticeable this becomes: saving a few KB per document adds up to a substantial amount at the scale of hundreds of millions of documents.
To disable the _source field, configure the mapping as follows:

{
    "yourtype":{
        "_source":{
            "enabled":false
        },
        "properties": {
            ... 
        }
    }
}

To store only the original values of certain fields in Elasticsearch, use the includes parameter in the mapping:

{
    "yourtype":{
        "_source":{
            "includes":["field1","field2"]
        },
        "properties": {
            ... 
        }
    }
}

Similarly, certain fields can be excluded with the excludes parameter:

{
    "yourtype":{
        "_source":{
            "excludes":["field1","field2"]
        },
        "properties": {
            ... 
        }
    }
}

To test, first create an index:

PUT test

Set the mapping, disabling _source:

PUT test/test/_mapping
{
   "test": {
      "_source": {
         "enabled": false
      }
   }
}

Index a document:

POST test/test/1
{
    "title":"我是中国人",
    "content":"热爱共产党"
}

Search for the keyword "中国人":

GET test/_search
{
    "query": {
        "match": {
           "title": "中国人"
        }
    }
}
{
   "took": 9,
   "timed_out": false,
   "_shards": {
      "total": 5,
      "successful": 5,
      "failed": 0
   },
   "hits": {
      "total": 1,
      "max_score": 0.30685282,
      "hits": [
         {
            "_index": "test",
            "_type": "test",
            "_id": "1",
            "_score": 0.30685282
         }
      ]
   }
}

The result shows that one document matched, but with _source disabled the original document content is no longer returned. (Note: these tests were run on Elasticsearch 2.3.3, with ik configured as the default analyzer in the configuration file.)
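
For reference, a sketch of how a default analyzer might be declared in elasticsearch.yml on that version; this assumes the ik plugin is installed, and the exact setting name can differ between plugin versions:

index.analysis.analyzer.default.type: ik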

2. _all configuration

In Elasticsearch 2.x the _all field is enabled by default, and keeping it enabled unavoidably makes the index larger. It is worth keeping when you want to search a document's entire content by keyword without targeting any particular field.
_all is configured much like _source; to enable it explicitly, the mapping looks like this:

{
   "yourtype": {
      "_all": {
         "enabled": true
      },
      "properties": {
            ... 
      }
   }
}

You can also control per field whether it is included in _all:

{
   "yourtype": {
      "properties": {
         "field1": {
             "type": "string",
             "include_in_all": false
          },
          "field2": {
             "type": "string",
             "include_in_all": true
          }
      }
   }
}

If the field's original value should also be kept, set its store attribute to true; this makes the index larger, so use it only when the requirement calls for it. Test code follows.
Create the test index:

DELETE  test
PUT test

Set the mapping; here all fields are included in _all, and _all's original value is stored:

PUT test/test/_mapping
{
   "test": {
      "_all": {
         "enabled": true,
         "store": true
      }
   }
}

Insert a document:

POST test/test/1
{
    "title":"我是中国人",
    "content":"热爱共产党"
}

Search against the _all field with highlighting:

POST test/_search
{
   "fields": ["_all"], 
   "query": {
      "match": {
         "_all": "中国人"
      }
   },
   "highlight": {
      "fields": {
         "_all": {}
      }
   }
}
{
   "took": 3,
   "timed_out": false,
   "_shards": {
      "total": 5,
      "successful": 5,
      "failed": 0
   },
   "hits": {
      "total": 1,
      "max_score": 0.15342641,
      "hits": [
         {
            "_index": "test",
            "_type": "test",
            "_id": "1",
            "_score": 0.15342641,
            "_all": "我是中国人 热爱共产党 ",
            "highlight": {
               "_all": [
                  "我是<em>中国人</em> 热爱共产党 "
               ]
            }
         }
      ]
   }
}

Elasticsearch's query_string and simple_query_string queries target the _all field by default, for example:

GET test/_search
{
    "query": {
        "query_string": {
           "query": "共产党"
        }
    }
}
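
For completeness, the simple_query_string form of the same search, which likewise targets _all by default:

GET test/_search
{
    "query": {
        "simple_query_string": {
           "query": "共产党"
        }
    }
}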

3. index and store configuration

The index and store attributes are set on individual fields. In the following example, the test index does not keep _source; the title field is indexed but not analyzed and its original value is stored in the index; the content field keeps the default settings:

DELETE  test
PUT test
PUT test/test/_mapping
{
   "test": {
      "_source": {
         "enabled": false
      },
      "properties": {
         "title": {
            "type": "string",
            "index": "not_analyzed",
            "store": "true"
         },
         "content": {
            "type": "string"
         }
      }
   }
}

Search the title field with highlighting:

GET test/_search
{
    "query": {
        "match": {
           "title": "我是中国人"
        }
    },
   "highlight": {
      "fields": {
         "title": {}
      }
   }
}
{
   "took": 6,
   "timed_out": false,
   "_shards": {
      "total": 5,
      "successful": 5,
      "failed": 0
   },
   "hits": {
      "total": 1,
      "max_score": 0.30685282,
      "hits": [
         {
            "_index": "test",
            "_type": "test",
            "_id": "1",
            "_score": 0.30685282,
            "highlight": {
               "title": [
                  "<em>我是中国人</em>"
               ]
            }
         }
      ]
   }
}

The result shows that even though the title field is not kept in _source, search and highlighting still work, because its original value is stored in the index.
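
Because title is stored in the index, its original value can also be returned directly by requesting stored fields; in Elasticsearch 2.x this is the fields parameter (renamed to stored_fields in 5.0). A quick sketch against the same test index:

GET test/_search
{
    "fields": ["title"],
    "query": {
        "match": {
           "title": "我是中国人"
        }
    }
}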

4. Summary

Through the diagrams and test code above, the _source, _all, store, and index attributes of Elasticsearch have been explained in detail and should now be easy to understand. Corrections for any errors or omissions are welcome.
