Setting up a Storm environment on Windows 10

Original article: https://blog.csdn.net/lu_wei_wei/article/details/80843365

 

 

1. Download Storm:

http://mirror.bit.edu.cn/apache/storm/apache-storm-1.2.2/apache-storm-1.2.2.zip 

2. Download ZooKeeper:

http://mirror.bit.edu.cn/apache/zookeeper/current/zookeeper-3.4.12.tar.gz 

3. Download and install Python. Storm's Windows launcher, bin\storm.py, is a Python script, so Python must be available on the machine.
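Before moving on, it is worth confirming that Python can be run from a command prompt, since the storm.py commands later in this guide are invoked directly. A minimal sanity check (this assumes Python was added to PATH during installation):

python --version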

4. Start ZooKeeper:

(1) Extract zookeeper-3.4.12.

 

(2) Go into zookeeper-3.4.12/conf.

(3) Copy zoo_sample.cfg and rename the copy to zoo.cfg; the default settings do not need to be changed (the defaults are shown after this list).

(4) Go into zookeeper-3.4.12/bin.

(5) Start ZooKeeper.

Command: zkServer.cmd

 

If the console shows ZooKeeper binding to its client port (2181 by default) and waiting for connections without errors, it has started successfully.
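For reference, the zoo.cfg copied from zoo_sample.cfg contains only a handful of settings; the values below are the ones shipped with ZooKeeper 3.4.x and may differ slightly in other releases:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181

To double-check that ZooKeeper is actually accepting connections, you can attach the bundled command-line client from the same bin directory (a quick sketch, assuming the default client port 2181):

zkCli.cmd -server 127.0.0.1:2181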

5. Start Storm:

(1) Configuration file

Go into apache-storm-1.2.2\conf and edit storm.yaml as follows:

 

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

########### These MUST be filled in for a storm configuration
# storm.zookeeper.servers:
#     - "server1"
#     - "server2"
storm.zookeeper.servers:
    - "127.0.0.1"
# 
# nimbus.seeds: ["host1", "host2", "host3"]
nimbus.seeds: ["127.0.0.1"]
storm.local.dir: "D:\\storm-local\\data3"
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
# 
# 
# ##### These may optionally be filled in:
#    
## List of custom serializations
# topology.kryo.register:
#     - org.mycompany.MyType
#     - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
#     - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
#     - "server1"
#     - "server2"

## Metrics Consumers
## max.retain.metric.tuples
## - task queue will be unbounded when max.retain.metric.tuples is equal or less than 0.
## whitelist / blacklist
## - when none of configuration for metric filter are specified, it'll be treated as 'pass all'.
## - you need to specify either whitelist or blacklist, or none of them. You can't specify both of them.
## - you can specify multiple whitelist / blacklist with regular expression
## expandMapType: expand metric with map type as value to multiple metrics
## - set to true when you would like to apply filter to expanded metrics
## - default value is false which is backward compatible value
## metricNameSeparator: separator between origin metric name and key of entry from map
## - only effective when expandMapType is set to true
# topology.metrics.consumer.register:
#   - class: "org.apache.storm.metric.LoggingMetricsConsumer"
#     max.retain.metric.tuples: 100
#     parallelism.hint: 1
#   - class: "org.mycompany.MyMetricsConsumer"
#     max.retain.metric.tuples: 100
#     whitelist:
#       - "execute.*"
#       - "^__complete-latency$"
#     parallelism.hint: 1
#     argument:
#       - endpoint: "metrics-collector.mycompany.org"
#     expandMapType: true
#     metricNameSeparator: "."

## Cluster Metrics Consumers
# storm.cluster.metrics.consumer.register:
#   - class: "org.apache.storm.metric.LoggingClusterMetricsConsumer"
#   - class: "org.mycompany.MyMetricsConsumer"
#     argument:
#       - endpoint: "metrics-collector.mycompany.org"
#
# storm.cluster.metrics.consumer.publish.interval.secs: 60

# Event Logger
# topology.event.logger.register:
#   - class: "org.apache.storm.metric.FileBasedEventLogger"
#   - class: "org.mycompany.MyEventLogger"
#     arguments:
#       endpoint: "event-logger.mycompany.org"

# Metrics v2 configuration (optional)
#storm.metrics.reporters:
#  # Graphite Reporter
#  - class: "org.apache.storm.metrics2.reporters.GraphiteStormReporter"
#    daemons:
#        - "supervisor"
#        - "nimbus"
#        - "worker"
#    report.period: 60
#    report.period.units: "SECONDS"
#    graphite.host: "localhost"
#    graphite.port: 2003
#
#  # Console Reporter
#  - class: "org.apache.storm.metrics2.reporters.ConsoleStormReporter"
#    daemons:
#        - "worker"
#    report.period: 10
#    report.period.units: "SECONDS"
#    filter:
#        class: "org.apache.storm.metrics2.filters.RegexFilter"
#        expression: ".*my_component.*emitted.*"

 

Start Storm: launch the Nimbus, Supervisor, and Storm UI daemons, each in its own console window (each command runs in the foreground).
Go into apache-storm-1.2.2\bin.
Start Nimbus:

storm.py nimbus


Start the Supervisor:

storm.py supervisor

 


Start the Storm UI:

storm.py ui

Once all three daemons are up, open http://127.0.0.1:8080/ in a browser to reach the Storm UI.
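With Nimbus, the Supervisor, and the UI running, you can exercise the cluster by submitting a topology. A hedged example follows: it assumes the binary distribution's bundled storm-starter examples jar is present under examples\storm-starter (the jar name and path may differ between releases). Run from apache-storm-1.2.2\bin:

storm.py jar ..\examples\storm-starter\storm-starter-topologies-1.2.2.jar org.apache.storm.starter.WordCountTopology wordcount

If submission succeeds, the wordcount topology should appear in the Topology Summary list at http://127.0.0.1:8080/, and it can later be stopped from the UI or with storm.py kill wordcount.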

 

Author: -奋斗的小鹿- (CSDN). Original: https://blog.csdn.net/lu_wei_wei/article/details/80843365. Copyright notice: this is the blogger's original article; please include a link to the original post when reposting.

Reposted from: https://www.cnblogs.com/shihaiming/p/10716908.html
