Ingesting Flume-Collected Data Directly into Solr


1. Background

On a CDH cluster, to support NRT (near real-time) search, the data collected by Flume is ingested into Solr, and Solr serves queries to the outside. After Flume receives an event (on the test machine dn12.hadoop, for example), a Morphline must perform ETL on it to transform it into Solr's document format, so the configuration is done in three steps: Solr, Flume, and Morphlines.

2. Solr configuration

Create (or later update) the collection:

solrctl instancedir --generate /home/data/collectionSignalling
solrctl instancedir --create collectionSignalling /home/data/collectionSignalling
solrctl collection --create collectionSignalling  -s 6 -m 15 -r 2 -c collectionSignalling -a
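
The three commands above generate a local config template, upload it to ZooKeeper as an instance directory, and create the collection with 6 shards (-s), at most 15 shards per node (-m), and 2 replicas (-r). As a quick sanity check (not part of the original post), you can list what was registered:

solrctl instancedir --list
solrctl collection --list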

Edit the schema.xml configuration:

 <fields> 
   <field name="_version_" type="long" indexed="true" stored="true"/>
   <field name="_root_" type="string" indexed="true" stored="false"/>   
   <field name="timestamp" type="tdate" indexed="true" stored="true" default="NOW+8HOUR" multiValued="false"/>
   <field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>    
   <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />    
   <!-- points to the root document of a block of nested documents. Required for nested
        document support, may be removed otherwise
   -->
  <field name="province_code" type="string" indexed="true" stored="true" multiValued="false"/>
  <field name="caller" type="string" indexed="true" stored="true" multiValued="false"/>
  <field name="called" type="string" indexed="true" stored="true" multiValued="false"/>
  <field name="call_status" type="string" indexed="true" stored="true" multiValued="false"/>
  <field name="call_time" type="tdate" indexed="true" stored="true" multiValued="false"/>
  <field name="length_time" type="long" indexed="true" stored="true" multiValued="false"/>

  <dynamicField name="ignored_*" type="ignored" multiValued="true"/>

 </fields>
 <!-- Field to use to determine and enforce document uniqueness. 
      Unless this field is marked with required="false", it will be a required field
   -->
 <uniqueKey>id</uniqueKey>
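
Note that the ignored_* dynamicField assumes a fieldType named ignored exists in the same schema.xml. If the generated schema lacks it, the stock Solr example schema defines it roughly like this (shown as a reminder, not taken from the original post):

<fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />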

After editing, push the update and reload the collection:

solrctl instancedir --update collectionSignalling /home/data/collectionSignalling
solrctl collection --reload collectionSignalling

3. Flume configuration

1. On the Flume service's configuration page in Cloudera Manager, declare the dependency on Solr: for the Solr Service option, select Solr.

2. Add the agent configuration file in Cloudera Manager. Note that morphlineFile takes just the file name; do not prepend a path.

tier1.sources=source1  
tier1.channels=channel1  
tier1.sinks=sink1  

tier1.sources.source1.type = avro  
tier1.sources.source1.bind = 0.0.0.0  
tier1.sources.source1.port = 44444  
tier1.sources.source1.channels=channel1  

tier1.channels.channel1.type=memory  
tier1.channels.channel1.capacity=10000  

tier1.sinks.sink1.type = org.apache.flume.sink.solr.morphline.MorphlineSolrSink  
tier1.sinks.sink1.channel = channel1  
tier1.sinks.sink1.morphlineFile = morphlines.conf  
tier1.sinks.sink1.morphlineId = collectionSignalling  
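
MorphlineSolrSink buffers events and sends them to Solr in batches. The defaults are usually fine, but the batch knobs can be tuned if throughput becomes an issue; the values below are illustrative, not from the original configuration:

# optional tuning, illustrative values
tier1.sinks.sink1.batchSize = 100
tier1.sinks.sink1.batchDurationMillis = 1000
# the memory channel's transactionCapacity must be >= the sink batch size
tier1.channels.channel1.transactionCapacity = 1000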

4. Morphlines configuration

In Cloudera Manager, add the ETL configuration under the Flume agent's Morphlines File option:

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

# Application configuration file in HOCON format (Human-Optimized Config Object Notation). 
# HOCON syntax is defined at http://github.com/typesafehub/config/blob/master/HOCON.md
# and also used by Akka (http://www.akka.io) and Play (http://www.playframework.org/).
# For more examples see http://doc.akka.io/docs/akka/2.1.2/general/configuration.html

# morphline.conf example file
# this is a comment

# Specify server locations in a SOLR_LOCATOR variable; used later in variable substitutions:
SOLR_LOCATOR : {
  # Name of solr collection
  collection : collectionSignalling

  # ZooKeeper ensemble
  zkHost : "nn1.hadoop:2181,nn2.hadoop:2181,dn7.hadoop:2181,dn5.hadoop:2181,dn3.hadoop:2181/solr"

  # Relative or absolute path to a directory containing conf/solrconfig.xml and conf/schema.xml
  # If this path is uncommented it takes precedence over the configuration stored in ZooKeeper.  
  # solrHomeDir : "example/solr/collection1"

  # The maximum number of documents to send to Solr per network batch (throughput knob)
  # batchSize : 100
}

# Specify an array of one or more morphlines, each of which defines an ETL 
# transformation chain. A morphline consists of one or more (potentially 
# nested) commands. A morphline is a way to consume records (e.g. Flume events, 
# HDFS files or blocks), turn them into a stream of records, and pipe the stream 
# of records through a set of easily configurable transformations on its way to 
# Solr (or a MapReduceIndexerTool RecordWriter that feeds via a Reducer into Solr).
morphlines : [
  {
    # Name used to identify a morphline. E.g. used if there are multiple morphlines in a 
    # morphline config file
    id : collectionSignalling 

    # Import all morphline commands in these java packages and their subpackages.
    # Other commands that may be present on the classpath are not visible to this morphline.
    importCommands : ["org.kitesdk.**", "org.apache.solr.**"]

    commands : [
      {
        # The JSON payload from Flume arrives as a binary byte stream, so parse it first
        readJson {}
      }

      # The extracted JSON paths must be turned into record fields,
      # otherwise Solr cannot index them
      {
        extractJsonPaths {
          flatten : false
          paths : {
            province_code : /province_code
            caller : /caller
            called : /called
            call_status : /call_status
            call_time : /call_time
            length_time : /length_time
          }
        }
      }

      # Consume the output record of the previous command and pipe another record downstream.
      #
      # convert timestamp field to native Solr timestamp format
      # e.g. 2012-09-06T07:14:34Z to 2012-09-06T07:14:34.000Z
      #{
      #  convertTimestamp {
      #    field : call_time
      #    inputFormats : ["yyyyMMdd HH:mm:ss"]
      #    inputTimezone : Asia/Shanghai
      #   outputFormat : "yyyy-MM-dd'T'HH:mm:ss.SSSZ"                                 
      #    outputTimezone : Asia/Shanghai
      #  }
      #}
      # Generate a UUID for each record; it populates the id uniqueKey field
      {
        generateUUID {
          field : id
        }
      }

      # Consume the output record of the previous command and pipe another record downstream.
      #
      # Command that sanitizes record fields that are unknown to Solr schema.xml by either 
      # deleting them (renameToPrefix is absent or a zero length string), or by moving them to a
      # field prefixed with the given renameToPrefix (e.g. renameToPrefix = "ignored_" to use 
      # typical dynamic Solr fields).
      #
      # Recall that Solr throws an exception on any attempt to load a document that contains a 
      # field that isn't specified in schema.xml.
      {
        sanitizeUnknownSolrFields {
          # Location from which to fetch Solr schema
          solrLocator : ${SOLR_LOCATOR}

          renameToPrefix : "ignored_"
        }
      }  

      # log the record at DEBUG level to SLF4J
      { logDebug { format : "output record: {}", args : ["@{}"] } }    

      # load the record into a SolrServer or MapReduce SolrOutputFormat,
      # i.e. index the data into Solr
      { 
        loadSolr {
          solrLocator : ${SOLR_LOCATOR}
        }
      }
    ]
  }
]
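
A note on the commented-out convertTimestamp block: the schema declares call_time as tdate, but the test record below sends it as "20161221 08:51:40", which is not a native Solr timestamp. If indexing fails on that field, enable the conversion somewhere before loadSolr. A sketch based on the commented block above (the output format and timezone here are assumptions, not verified against the original deployment):

{
  convertTimestamp {
    field : call_time
    inputFormats : ["yyyyMMdd HH:mm:ss"]
    inputTimezone : Asia/Shanghai
    # Solr expects UTC times in yyyy-MM-dd'T'HH:mm:ss.SSS'Z' form
    outputFormat : "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
    outputTimezone : UTC
  }
}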

5. Testing

Create a file named file01 (for example at /home/hadoop/test/zhenzhen/file01) with the following content:

{"province_code":"150000","caller":"18353662886","called":"15335586466","call_status":"1","call_time":"20161221 08:51:40","length_time":"58526"}

Then go to the Flume bin directory and send the file with the Avro client:

[hadoop@db1 bin]$ cd /opt/cloudera/parcels/CDH/bin
[hadoop@db1 bin]$ flume-ng avro-client -H dn12.hadoop -p 44444 -F /home/hadoop/test/zhenzhen/file01

Check the Flume logs to verify that the event was processed.

1. From the command line:

[root@dn12 flume-ng]# pwd
/var/log/flume-ng
[root@dn12 flume-ng]# tail -f flume-cmf-flume-AGENT-dn12.hadoop.log

2. From Cloudera Manager: if the log lacks detail, go to Flume agent -> Logging -> Agent Logging Threshold and lower the level to TRACE. The logs can also be viewed from the Log Files menu in the CM UI.
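
Finally, verify that the record actually landed in Solr by querying the collection, for example (the host and port here are assumptions; use any Solr server in your cluster):

curl "http://dn12.hadoop:8983/solr/collectionSignalling/select?q=caller:18353662886&wt=json&indent=true"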

6. References

http://blog.csdn.net/xiao_jun_0820/article/details/40741997
http://www.cnblogs.com/arli/p/6158771.html