HBase (NoSQL) integrated with Hadoop, Hive, Kafka, Flume and ZooKeeper: fixing the startup error "Error: JAVA_HOME is not set" and the HMaster process exiting seconds after startup


1. HBase Installation and Deployment


2. Extracting HBase




 
  
  [cevent@hadoop207 module]$ tar -zxvf hbase-1.3.1-bin.tar.gz  -C /opt/module/
  [cevent@hadoop207 soft]$ cd /opt/module/
  [cevent@hadoop207 module]$ ll
  总用量 44
  drwxrwxr-x. 12 cevent cevent 4096 6月  19 17:50 apache-flume-1.7.0
  drwxrwxr-x.  8 cevent cevent 4096 6月  19 17:53 datas
  drwxr-xr-x. 11 cevent cevent 4096 5月  22 2017 hadoop-2.7.2
  drwxrwxr-x.  3 cevent cevent 4096 6月   5 13:27 hadoop-2.7.2-snappy
drwxrwxr-x.  7 cevent cevent 4096 6月  19 23:09 hbase-1.3.1
  drwxrwxr-x. 10 cevent cevent 4096 5月  22 13:34 hive-1.2.1
  drwxr-xr-x.  8 cevent cevent 4096 4月  11 2015 jdk1.7.0_79
  drwxr-xr-x.  7 cevent cevent 4096 6月  17 18:23 kafka_2.11-0.11.0.0
  drwxrwxr-x.  2 cevent cevent 4096 6月  19 18:22 kafka-monitor
  -rw-rw-r--.  1 cevent cevent   23 6月  16 21:11 xsync.txt
  drwxr-xr-x. 11 cevent cevent 4096 6月  17 11:54 zookeeper-3.4.10
   
  
 


3. HBase Configuration Files




 
  
  [cevent@hadoop207 module]$ cd hbase-1.3.1/
  [cevent@hadoop207 hbase-1.3.1]$ ll
  总用量 348
  drwxr-xr-x.  4 cevent cevent   4096 4月   5 2017 bin
  -rw-r--r--.  1 cevent cevent 148959 4月   7 2017 CHANGES.txt
  drwxr-xr-x.  2 cevent cevent   4096 4月   5 2017 conf
  drwxr-xr-x. 12 cevent cevent   4096 4月   7 2017 docs
  drwxr-xr-x.  7 cevent cevent   4096 4月   7 2017 hbase-webapps
  -rw-r--r--.  1 cevent cevent    261 4月   7 2017 LEGAL
  drwxrwxr-x.  3 cevent cevent   4096 6月  19 23:09 lib
  -rw-r--r--.  1 cevent cevent 130696 4月   7 2017 LICENSE.txt
  -rw-r--r--.  1 cevent cevent  43258 4月   7 2017 NOTICE.txt
  -rw-r--r--.  1 cevent cevent   1477 9月  21 2016 README.txt
  [cevent@hadoop207 hbase-1.3.1]$ cd conf/
  [cevent@hadoop207 conf]$ ll
  总用量 40
  -rw-r--r--. 1 cevent cevent 1811 9月  21 2016 hadoop-metrics2-hbase.properties
  -rw-r--r--. 1 cevent cevent 4537 11月  7 2016 hbase-env.cmd
  -rw-r--r--. 1 cevent cevent 7468 11月  7 2016 hbase-env.sh
  -rw-r--r--. 1 cevent cevent 2257 9月  21 2016 hbase-policy.xml
  -rw-r--r--. 1 cevent cevent  934 9月  21 2016 hbase-site.xml
  -rw-r--r--. 1 cevent cevent 4722 4月   5 2017 log4j.properties
  -rw-r--r--. 1 cevent cevent   10 12月  1 2015 regionservers
[cevent@hadoop207 conf]$ vim hbase-env.sh 
#
#/**
# * Licensed to the Apache Software Foundation (ASF) under one
# * or more contributor license agreements.  See the NOTICE file
# * distributed with this work for additional information
# * regarding copyright ownership.  The ASF licenses this file
# * to you under the Apache License, Version 2.0 (the
# * "License"); you may not use this file except in compliance
# * with the License.  You may obtain a copy of the License at
# *
# *     http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */

# Set environment variables here.

# This script sets variables multiple times over the course of starting an hbase process,
# so try to keep things idempotent unless you want to take an even deeper look
# into the startup scripts (bin/hbase, etc.)

# The java implementation to use.  Java 1.7+ required.
# export JAVA_HOME=/usr/java/jdk1.6.0/

# Extra Java CLASSPATH elements.  Optional.
# export HBASE_CLASSPATH=

# The maximum amount of heap to use. Default is left to JVM default.
# export HBASE_HEAPSIZE=1G

# Uncomment below if you intend to use off heap cache. For example, to allocate 8G of
# offheap, set the value to "8G".
# export HBASE_OFFHEAPSIZE=1G

# Extra Java runtime options.
# Below are what we set by default.  May only work with SUN JVM.
# For more on why as well as other possible settings,
# see http://wiki.apache.org/hadoop/PerformanceTuning
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"

# Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"

# Uncomment one of the below three options to enable java garbage collection logging for the server-side processes.

# This enables basic gc logging to the .out file.
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"

# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"

# Uncomment one of the below three options to enable java garbage collection logging for the client processes.

# This enables basic gc logging to the .out file.
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"

# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"

# See the package documentation for org.apache.hadoop.hbase.io.hfile for other configurations
# needed setting up off-heap block caching.

# Uncomment and adjust to enable JMX exporting
# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
# NOTE: HBase provides an alternative JMX implementation to fix the random ports issue, please see JMX
# section in HBase Reference Guide for instructions.

# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"
# export HBASE_REST_OPTS="$HBASE_REST_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10105"

# File naming hosts on which HRegionServers will run.  $HBASE_HOME/conf/regionservers by default.
# export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers

# Uncomment and adjust to keep all the Region Server pages mapped to be memory resident
#HBASE_REGIONSERVER_MLOCK=true
#HBASE_REGIONSERVER_UID="hbase"

# File naming hosts on which backup HMaster will run.  $HBASE_HOME/conf/backup-masters by default.
# export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters

# Extra ssh options.  Empty by default.
# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"

# Where log files are stored.  $HBASE_HOME/logs by default.
# export HBASE_LOG_DIR=${HBASE_HOME}/logs

# Enable remote JDWP debugging of major HBase processes. Meant for Core Developers
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8073"

# A string representing this instance of hbase. $USER by default.
# export HBASE_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HBASE_NICENESS=10

# The directory where pid files are stored. /tmp by default.
# export HBASE_PID_DIR=/var/hadoop/pids

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
# export HBASE_MANAGES_ZK=true  (changed below: do not let HBase start its own ZooKeeper)
export HBASE_MANAGES_ZK=false

# The default log rolling policy is RFA, where the log file is rolled as per the size defined for the
# RFA appender. Please refer to the log4j.properties file to see more details on this appender.
# In case one needs to do log rolling on a date change, one should set the environment property
# HBASE_ROOT_LOGGER to "<DESIRED_LOG LEVEL>,DRFA".
# For example:
# HBASE_ROOT_LOGGER=INFO,DRFA
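The only setting actually changed in this step is HBASE_MANAGES_ZK (JAVA_HOME is fixed later, in step 9). A quick way to see which exports are live is to grep for uncommented lines; this sketch uses a stand-in file named hbase-env.demo, while the real file is conf/hbase-env.sh:

```shell
# Stand-in for conf/hbase-env.sh holding just the line this step changes.
cat > hbase-env.demo <<'EOF'
# export HBASE_MANAGES_ZK=true
export HBASE_MANAGES_ZK=false
EOF

# List only the uncommented exports, i.e. what the startup scripts will see.
grep '^export' hbase-env.demo
```

On the real file, `grep '^export' conf/hbase-env.sh` shows every active override at a glance.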
  
 


4. Changes to hbase-site.xml




 
  
  [cevent@hadoop207 conf]$ ll
  总用量 40
  -rw-r--r--. 1 cevent cevent 1811 9月  21 2016 hadoop-metrics2-hbase.properties
  -rw-r--r--. 1 cevent cevent 4537 11月  7 2016 hbase-env.cmd
  -rw-r--r--. 1 cevent cevent 7497 6月  19 23:17 hbase-env.sh
  -rw-r--r--. 1 cevent cevent 2257 9月  21 2016 hbase-policy.xml
  -rw-r--r--. 1 cevent cevent  934 9月  21 2016 hbase-site.xml
  -rw-r--r--. 1 cevent cevent 4722 4月   5 2017 log4j.properties
  -rw-r--r--. 1 cevent cevent   10 12月  1 2015 regionservers
  [cevent@hadoop207 conf]$ vim hbase-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://hadoop207.cevent.com:9000/HBase</value>
        </property>

        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>

        <!-- New in 0.98+; earlier versions had no .port property (default port 60000) -->
        <property>
                <name>hbase.master.port</name>
                <value>16000</value>
        </property>

        <property>
                <name>hbase.zookeeper.quorum</name>
                <value>hadoop207.cevent.com,hadoop208.cevent.com,hadoop209.cevent.com</value>
        </property>

        <property>
                <name>hbase.zookeeper.property.dataDir</name>
                <value>/opt/module/zookeeper-3.4.10/data/zkData</value>
        </property>
</configuration>
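A single stray or unmatched tag in hbase-site.xml (for example a duplicated `<configuration>` open tag) will crash HMaster at startup, so a cheap well-formedness check pays off. A sketch against a stand-in file; on a node with libxml2 installed, `xmllint --noout conf/hbase-site.xml` is the more thorough check:

```shell
# Stand-in for conf/hbase-site.xml (hbase-site.demo.xml is hypothetical).
cat > hbase-site.demo.xml <<'EOF'
<configuration>
        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>
</configuration>
EOF

# Cheap tag-balance check: each tag's open count must equal its close count.
for tag in configuration property name value; do
  opens=$(grep -c "<$tag>" hbase-site.demo.xml)
  closes=$(grep -c "</$tag>" hbase-site.demo.xml)
  [ "$opens" -eq "$closes" ] && echo "$tag ok ($opens)"
done
```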
   
  
 


5. Configuring regionservers




 
  
  [cevent@hadoop207 conf]$ vim regionservers 
  hadoop207.cevent.com
  hadoop208.cevent.com
  hadoop209.cevent.com
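bin/hbase-daemons.sh works by looping over this file and starting a regionserver on each listed host over ssh. A simplified sketch of that loop, with echo standing in for the actual ssh invocation and regionservers.demo as a stand-in file:

```shell
# Stand-in for conf/regionservers: one hostname per line.
cat > regionservers.demo <<'EOF'
hadoop207.cevent.com
hadoop208.cevent.com
hadoop209.cevent.com
EOF

# What the daemons script effectively does on each host.
while read -r host; do
  echo "ssh $host 'hbase-daemon.sh start regionserver'"
done < regionservers.demo
```

This is why passwordless ssh from the master to every listed host is a prerequisite.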
  
 


6. Symlinking core-site.xml and hdfs-site.xml

[cevent@hadoop207 conf]$ ll
总用量 40
-rw-r--r--. 1 cevent cevent 1811 9月  21 2016 hadoop-metrics2-hbase.properties
-rw-r--r--. 1 cevent cevent 4537 11月  7 2016 hbase-env.cmd
-rw-r--r--. 1 cevent cevent 7497 6月  19 23:17 hbase-env.sh
-rw-r--r--. 1 cevent cevent 2257 9月  21 2016 hbase-policy.xml
-rw-r--r--. 1 cevent cevent 1586 6月  19 23:23 hbase-site.xml
-rw-r--r--. 1 cevent cevent 4722 4月   5 2017 log4j.properties
-rw-r--r--. 1 cevent cevent   63 6月  19 23:26 regionservers
[cevent@hadoop207 conf]$ ln -s /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml 
[cevent@hadoop207 conf]$ ln -s /opt/module/hadoop-2.7.2/etc/hadoop/hdfs-site.xml 
[cevent@hadoop207 conf]$ ll
总用量 40
lrwxrwxrwx. 1 cevent cevent   49 6月  19 23:29 core-site.xml -> /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml
-rw-r--r--. 1 cevent cevent 1811 9月  21 2016 hadoop-metrics2-hbase.properties
-rw-r--r--. 1 cevent cevent 4537 11月  7 2016 hbase-env.cmd
-rw-r--r--. 1 cevent cevent 7497 6月  19 23:17 hbase-env.sh
-rw-r--r--. 1 cevent cevent 2257 9月  21 2016 hbase-policy.xml
-rw-r--r--. 1 cevent cevent 1586 6月  19 23:23 hbase-site.xml
lrwxrwxrwx. 1 cevent cevent   49 6月  19 23:29 hdfs-site.xml -> /opt/module/hadoop-2.7.2/etc/hadoop/hdfs-site.xml
-rw-r--r--. 1 cevent cevent 4722 4月   5 2017 log4j.properties
-rw-r--r--. 1 cevent cevent   63 6月  19 23:26 regionservers
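HBase resolves the hdfs:// rootdir using whatever HDFS client settings it finds on its classpath, so conf/ needs core-site.xml and hdfs-site.xml; symlinks keep them permanently in sync with the Hadoop-owned copies instead of drifting apart. A sketch with stand-in directories:

```shell
# Stand-in layout: demo/hadoop-conf plays the Hadoop etc/hadoop directory,
# demo/hbase-conf plays the HBase conf directory.
mkdir -p demo/hadoop-conf demo/hbase-conf
echo '<configuration/>' > demo/hadoop-conf/core-site.xml
ln -sf ../hadoop-conf/core-site.xml demo/hbase-conf/core-site.xml

# The link resolves to the Hadoop-owned file, so later edits on the
# Hadoop side are picked up automatically by HBase.
cat demo/hbase-conf/core-site.xml
```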

7. Distributing HBase to the Other Nodes

[cevent@hadoop207 module]$ xsync hbase-1.3.1/

[cevent@hadoop208 module]$ ll
总用量 24
drwxr-xr-x. 12 cevent cevent 4096 6月  16 21:35 hadoop-2.7.2
drwxrwxr-x.  7 cevent cevent 4096 6月  19 23:31 hbase-1.3.1
drwxr-xr-x.  8 cevent cevent 4096 3月  24 09:14 jdk1.7.0_79
drwxr-xr-x.  7 cevent cevent 4096 6月  18 09:50 kafka_2.11-0.11.0.0
-rw-rw-r--.  1 cevent cevent   23 6月  16 21:11 xsync.txt
drwxr-xr-x. 11 cevent cevent 4096 6月  17 13:36 zookeeper-3.4.10

[cevent@hadoop209 ~]$ cd /opt/module/
[cevent@hadoop209 module]$ ll
总用量 24
drwxr-xr-x. 12 cevent cevent 4096 6月  16 21:37 hadoop-2.7.2
drwxrwxr-x.  7 cevent cevent 4096 6月  19 23:33 hbase-1.3.1
drwxr-xr-x.  8 cevent cevent 4096 3月  24 09:14 jdk1.7.0_79
drwxr-xr-x.  7 cevent cevent 4096 6月  18 09:51 kafka_2.11-0.11.0.0
-rw-rw-r--.  1 cevent cevent   23 6月  16 21:11 xsync.txt
drwxr-xr-x. 11 cevent cevent 4096 6月  17 13:36 zookeeper-3.4.10
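xsync is a site-specific rsync wrapper, not a stock tool. A minimal sketch of what it appears to do, assuming passwordless ssh and an identical directory layout on every host; echo prints the commands instead of running them, so this is safe to try anywhere:

```shell
# Hypothetical reimplementation of the xsync wrapper used above.
xsync_demo() {
  target=$(readlink -f "$1")
  for host in hadoop208.cevent.com hadoop209.cevent.com; do
    # Real wrapper would run this; here we only print it.
    echo rsync -av "$target" "$host:$(dirname "$target")/"
  done
}

mkdir -p hbase-1.3.1.demo
xsync_demo hbase-1.3.1.demo | tee xsync_demo.out
```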
  
 


8. Starting HBase: Error: JAVA_HOME is not set

[cevent@hadoop207 hadoop-2.7.2]$ sbin/start-dfs.sh 
[cevent@hadoop207 hadoop-2.7.2]$ sbin/start-yarn.sh 

[cevent@hadoop207 zookeeper-3.4.10]$ bin/zkServer.sh start

[cevent@hadoop207 zookeeper-3.4.10]$ cd /opt/module/hbase-1.3.1/
[cevent@hadoop207 hbase-1.3.1]$ ll
总用量 348
drwxr-xr-x.  4 cevent cevent   4096 4月   5 2017 bin
-rw-r--r--.  1 cevent cevent 148959 4月   7 2017 CHANGES.txt
drwxr-xr-x.  2 cevent cevent   4096 6月  19 23:29 conf
drwxr-xr-x. 12 cevent cevent   4096 4月   7 2017 docs
drwxr-xr-x.  7 cevent cevent   4096 4月   7 2017 hbase-webapps
-rw-r--r--.  1 cevent cevent    261 4月   7 2017 LEGAL
drwxrwxr-x.  3 cevent cevent   4096 6月  19 23:09 lib
-rw-r--r--.  1 cevent cevent 130696 4月   7 2017 LICENSE.txt
-rw-r--r--.  1 cevent cevent  43258 4月   7 2017 NOTICE.txt
-rw-r--r--.  1 cevent cevent   1477 9月  21 2016 README.txt
[cevent@hadoop207 hbase-1.3.1]$ bin/hbase-daemon.sh start master  start the HBase master
starting master, logging to /opt/module/hbase-1.3.1/bin/../logs/hbase-cevent-master-hadoop207.cevent.com.out
[cevent@hadoop207 hbase-1.3.1]$ jps
4628 HMaster
4197 NodeManager
4542 QuorumPeerMain
3580 NameNode
3878 SecondaryNameNode
4773 Jps
3696 DataNode
4079 ResourceManager
[cevent@hadoop207 hbase-1.3.1]$ bin/hbase-daemons.sh start regionserver  start the regionservers
hadoop209.cevent.com: +======================================================================+
hadoop209.cevent.com: |                 Error: JAVA_HOME is not set                          |
hadoop209.cevent.com: +----------------------------------------------------------------------+
hadoop209.cevent.com: | Please download the latest Sun JDK from the Sun Java web site        |
hadoop209.cevent.com: |     > http://www.oracle.com/technetwork/java/javase/downloads        |
hadoop209.cevent.com: |                                                                      |
hadoop209.cevent.com: | HBase requires Java 1.7 or later.                                    |
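Why does the error hit only the remote hosts while the local HMaster started fine? The regionservers are launched over non-interactive ssh, and non-interactive shells do not source /etc/profile, so a JAVA_HOME exported there never reaches the remote hbase-daemon.sh. That is why the fix in the next step sets it inside hbase-env.sh, which the scripts source themselves. `env -i` simulates that stripped environment:

```shell
# JAVA_HOME is visible in the current login shell...
export JAVA_HOME=/opt/module/jdk1.7.0_79

# ...but a child started with an empty environment (like a non-interactive
# ssh session that never sources /etc/profile) does not see it.
env -i sh -c 'echo "remote side sees JAVA_HOME=${JAVA_HOME:-<not set>}"'
# prints: remote side sees JAVA_HOME=<not set>
```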
   
  
 


9. Setting JAVA_HOME

[cevent@hadoop207 hbase-1.3.1]$ vim conf/hbase-env.sh 
#
#/**
# * Licensed to the Apache Software Foundation (ASF) under one
# * or more contributor license agreements.  See the NOTICE file
# * distributed with this work for additional information
# * regarding copyright ownership.  The ASF licenses this file
# * to you under the Apache License, Version 2.0 (the
# * "License"); you may not use this file except in compliance
# * with the License.  You may obtain a copy of the License at
# *
# *     http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */

# Set environment variables here.

# This script sets variables multiple times over the course of starting an hbase process,
# so try to keep things idempotent unless you want to take an even deeper look
# into the startup scripts (bin/hbase, etc.)

# The java implementation to use.  Java 1.7+ required.
# export JAVA_HOME=/usr/java/jdk1.6.0/

#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin

# Extra Java CLASSPATH elements.  Optional.
# export HBASE_CLASSPATH=

[cevent@hadoop207 hbase-1.3.1]$ xsync conf/hbase-env.sh  sync the hbase-env config to all nodes
fname=hbase-env.sh
pdir=/opt/module/hbase-1.3.1/conf
--------------- hadoop207.cevent.com ----------------
sending incremental file list

sent 35 bytes  received 12 bytes  94.00 bytes/sec
total size is 7583  speedup is 161.34
--------------- hadoop208.cevent.com ----------------
sending incremental file list
hbase-env.sh

sent 904 bytes  received 97 bytes  667.33 bytes/sec
total size is 7583  speedup is 7.58
--------------- hadoop209.cevent.com ----------------
sending incremental file list
hbase-env.sh

sent 904 bytes  received 97 bytes  2002.00 bytes/sec
total size is 7583  speedup is 7.58
  
 


10. Configuring /etc/profile (set separately on each of the three servers)

[cevent@hadoop207 hbase-1.3.1]$ pwd
/opt/module/hbase-1.3.1
[cevent@hadoop207 hbase-1.3.1]$ sudo vim /etc/profile
[sudo] password for cevent: 
    pathmunge /sbin
    pathmunge /usr/sbin
    pathmunge /usr/local/sbin
else
    pathmunge /usr/local/sbin after
    pathmunge /usr/sbin after
    pathmunge /sbin after
fi

HOSTNAME=`/bin/hostname 2>/dev/null`
HISTSIZE=1000
if [ "$HISTCONTROL" = "ignorespace" ] ; then
    export HISTCONTROL=ignoreboth
else
    export HISTCONTROL=ignoredups
fi

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL

# By default, we want umask to get set. This sets it for login shell
# Current threshold for system reserved uid/gids is 200
# You could check uidgid reservation validity in
# /usr/share/doc/setup-*/uidgid file
if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
    umask 002
else
    umask 022
fi

for i in /etc/profile.d/*.sh ; do
    if [ -r "$i" ]; then
        if [ "${-#*i}" != "$-" ]; then
            . "$i"
        else
            . "$i" >/dev/null 2>&1
        fi
    fi
done

unset i
unset -f pathmunge

#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin

#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

#HIVE_HOME
export HIVE_HOME=/opt/module/hive-1.2.1
export PATH=$PATH:$HIVE_HOME/bin

#FLUME_HOME
export FLUME_HOME=/opt/module/apache-flume-1.7.0
export PATH=$PATH:$FLUME_HOME/bin

#ZOOKEEPER_HOME
export ZOOKEEPER_HOME=/opt/module/zookeeper-3.4.10
export PATH=$PATH:$ZOOKEEPER_HOME/bin

#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka_2.11-0.11.0.0
export PATH=$PATH:$KAFKA_HOME/bin

#HBASE_HOME
export HBASE_HOME=/opt/module/hbase-1.3.1
export PATH=$PATH:$HBASE_HOME/bin

[cevent@hadoop207 hbase-1.3.1]$ source /etc/profile
  
 


11. Fixing the HMaster process exiting a few seconds after startup

The HMaster comes up and then dies because hbase.rootdir was pointed at port 9000, while fs.defaultFS in core-site.xml uses port 8020; the two must agree.

(1) Check the port configured in Hadoop




 
  
  [cevent@hadoop207 hadoop-2.7.2]$ cd
  etc/hadoop/
   
  [cevent@hadoop207 hadoop]$ vim
  core-site.xml 
  <?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet
  type="text/xsl" href="configuration.xsl"?>
  <!--
   
  Licensed under the Apache License, Version 2.0 (the
  "License");
   
  you may not use this file except in compliance with the License.
   
  You may obtain a copy of the License at
   
      http://www.apache.org/licenses/LICENSE-2.0
   
   
  Unless required by applicable law or agreed to in writing, software
   
  distributed under the License is distributed on an "AS IS"
  BASIS,
   
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   
  See the License for the specific language governing permissions and
   
  limitations under the License. See accompanying LICENSE file.
  -->
   
  <!-- Put site-specific property
  overrides in this file. -->
   
  <configuration>
         
  <!-- 指定HDFS中NameNode地址 ,设置的hadoop207.cevent.com=hostname
  -->
         
  <property>
                 
  <name>fs.defaultFS</name>
                  <value>hdfs://hadoop207.cevent.com:8020</value>
         
  </property>
   
         
  <!-- 指定tmp数据存储位置 -->
         
  <property>
                  <name>hadoop.tmp.dir</name>
                 
  <value>/opt/module/hadoop-2.7.2/data/tmp</value>
         
  </property>
   
   
  </configuration>
   
  
 


(2) Change the port in the HBase configuration

[cevent@hadoop207 hbase-1.3.1]$ vim conf/hbase-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://hadoop207.cevent.com:8020/HBase</value>
        </property>

        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>

        <!-- New in 0.98+; earlier versions had no .port property (default port 60000) -->
        <property>
                <name>hbase.master.port</name>
                <value>16000</value>
        </property>

        <property>
                <name>hbase.zookeeper.quorum</name>
                <value>hadoop207.cevent.com,hadoop208.cevent.com,hadoop209.cevent.com</value>
        </property>

        <property>
                <name>hbase.zookeeper.property.dataDir</name>
                <value>/opt/module/zookeeper-3.4.10/data/zkData</value>
        </property>
</configuration>
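This is the actual root cause of the HMaster exits: hbase.rootdir must name the same NameNode port as fs.defaultFS (8020 here, not the 9000 used in step 4). A scripted cross-check, with one-line heredocs standing in for the two real files:

```shell
# Stand-ins for the relevant lines of core-site.xml and hbase-site.xml.
cat > core-site.demo <<'EOF'
<value>hdfs://hadoop207.cevent.com:8020</value>
EOF
cat > hbase-site.demo <<'EOF'
<value>hdfs://hadoop207.cevent.com:8020/HBase</value>
EOF

# Extract the first :port token from each file and compare.
hdfs_port=$(grep -Eo ':[0-9]+' core-site.demo | head -1 | tr -d ':')
hbase_port=$(grep -Eo ':[0-9]+' hbase-site.demo | head -1 | tr -d ':')
if [ "$hdfs_port" = "$hbase_port" ]; then
  echo "rootdir port matches fs.defaultFS: $hdfs_port"
else
  echo "MISMATCH: $hdfs_port vs $hbase_port, HMaster will exit"
fi
```

On the cluster, point the greps at etc/hadoop/core-site.xml and conf/hbase-site.xml instead of the demo files.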
  
 



[cevent@hadoop207 hbase-1.3.1]$ jps
10056 Jps
9430 HMaster
4197 NodeManager
8151 ZooKeeperMain
4542 QuorumPeerMain
3580 NameNode
3878 SecondaryNameNode
3696 DataNode
4079 ResourceManager


(3) Restart HBase

[cevent@hadoop207 hbase-1.3.1]$ bin/stop-hbase.sh  stop HBase
stopping hbase.....

[cevent@hadoop207 hbase-1.3.1]$ bin/start-hbase.sh  start HBase
starting master, logging to /opt/module/hbase-1.3.1/logs/hbase-cevent-master-hadoop207.cevent.com.out
hadoop208.cevent.com: starting regionserver, logging to /opt/module/hbase-1.3.1/bin/../logs/hbase-cevent-regionserver-hadoop208.cevent.com.out
hadoop209.cevent.com: starting regionserver, logging to /opt/module/hbase-1.3.1/bin/../logs/hbase-cevent-regionserver-hadoop209.cevent.com.out
hadoop207.cevent.com: starting regionserver, logging to /opt/module/hbase-1.3.1/bin/../logs/hbase-cevent-regionserver-hadoop207.cevent.com.out
[cevent@hadoop207 hbase-1.3.1]$ jps
11394 HRegionServer
4197 NodeManager
8151 ZooKeeperMain
4542 QuorumPeerMain
3580 NameNode
3878 SecondaryNameNode
11248 HMaster
3696 DataNode
4079 ResourceManager
11526 Jps
   
  
 


12. HBase Web UI

Visit: http://hadoop207.cevent.com:16010/master-status

(screenshot: the HBase master-status page)

13. Basic HBase Table Operations




 
  
  [cevent@hadoop207 hbase-1.3.1]$ bin/hbase shell  启动shell
  SLF4J: Class path contains multiple SLF4J
  bindings.
  SLF4J: Found binding in
  [jar:file:/opt/module/hbase-1.3.1/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: Found binding in
  [jar:file:/opt/module/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: See
  http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
  SLF4J: Actual binding is of type
  [org.slf4j.impl.Log4jLoggerFactory]
  HBase Shell; enter 'help<RETURN>'
  for list of supported commands.
  Type "exit<RETURN>" to
  leave the HBase Shell
  Version 1.3.1,
  r930b9a55528fe45d8edce7af42fef2d35e77677a, Thu Apr  6 19:36:54 PDT 2017
   
  hbase(main):001:0> create 'c_student','info' 
  创建表
  0 row(s) in 2.8950 seconds
   
  hbase(main):001:0> put 'c_student','2020','info:gender','male'  插入表数据(info=column列族,gender为一列)
  0 row(s) in 0.3740 seconds
   
  hbase(main):002:0> put 'c_student','2020','info:name','cevent'
  0 row(s) in 0.0090 seconds
   
  hbase(main):004:0> put 'c_student','2021','info:name','echo'
  0 row(s) in 0.0140 seconds
   
  hbase(main):005:0> put 'c_student','2021','info:age','26'
  0 row(s) in 0.0100 seconds
   
  hbase(main):006:0> scan 'c_student'  查询表
  ROW                     COLUMN+CELL                                                     
   2020                   column=info:gender, timestamp=1592630739988,
  value=male         
   2020                   column=info:name,
  timestamp=1592630775739, value=cevent        
  
   2021                   column=info:age,
  timestamp=1592630954567, value=26             
  
   2021                   column=info:name,
  timestamp=1592630924801, value=echo          
  
  2 row(s) in 0.0280 seconds
   
  hbase(main):007:0> scan 'c_student',{STARTROW=>'2020',STOPROW=>'2020'} 根据row索引查询
  ROW                     COLUMN+CELL                                                    
  
   2020       
             column=info:gender,
  timestamp=1592630739988, value=male        
  
   2020                   column=info:name,
  timestamp=1592630775739, value=cevent        
  
  1 row(s) in 0.0150 seconds
   
  hbase(main):008:0> put 'c_student','2020','info:age','30' 插入值,直接覆盖
  0 row(s) in 0.0130 seconds
   
  hbase(main):009:0> scan 'c_student'
  ROW                     COLUMN+CELL                                                     
   2020                   column=info:age,
  timestamp=1592631369503, value=30             
  
   2020                   column=info:gender,
  timestamp=1592630739988, value=male        
  
   2020                   column=info:name,
  timestamp=1592630775739, value=cevent        
  
   2021                   column=info:age,
  timestamp=1592630954567, value=26             
  
   2021  
                  column=info:name,
  timestamp=1592630924801, value=echo          
  
  2 row(s) in 0.0140 seconds
   
  hbase(main):010:0> scan 'c_student',{STARTROW=>'2020'}
  ROW                     COLUMN+CELL                                                    
  
   2020                   column=info:age,
  timestamp=1592631369503, value=30             
  
   2020                   column=info:gender,
  timestamp=1592630739988, value=male        
  
   2020                   column=info:name,
  timestamp=1592630775739, value=cevent        
  
   2021                   column=info:age,
  timestamp=1592630954567, value=26             
  
   2021                   column=info:name, timestamp=1592630924801,
  value=echo           
  2 row(s) in 0.0250 seconds
   
  hbase(main):011:0> describe 'c_student' 查询表结构
  Table c_student is ENABLED                                                             
  
  c_student                                                                               
  COLUMN FAMILIES DESCRIPTION                                                            
  
  {NAME => 'info', DATA_BLOCK_ENCODING
  => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE 
  => '0', VERSIONS => '1', COMPRESSION
  => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', K
  EEP_DELETED_CELLS => 'FALSE',
  BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 
  'true'}                                                                                
  
  1 row(s) in 0.0590 seconds
   
  hbase(main):012:0> put 'c_student','2020','info:age','31'
  0 row(s) in 0.0110 seconds
   
  hbase(main):013:0> scan 'c_student',{STARTROW=>'2020',STOPROW=>'2020'}  根据row 索引查询
  ROW                     COLUMN+CELL                                               
       
   2020                   column=info:age,
  timestamp=1592631573367, value=31             
  
   2020                   column=info:gender,
  timestamp=1592630739988, value=male        
  
   2020                   column=info:name,
  timestamp=1592630775739, value=cevent        
  
  1 row(s) in 0.0200 seconds
   
  hbase(main):014:0> get 'c_student','2020' 
  根据表列族查询
  COLUMN                  CELL                                                           
  
   info:age               timestamp=1592631573367,
  value=31                               
   info:gender            timestamp=1592630739988,
  value=male                             
   info:name              timestamp=1592630775739,
  value=cevent                           
  1 row(s) in 0.0260 seconds
   
  hbase(main):015:0> get 'c_student','2020','info:name'  指定get查询结果
  COLUMN                  CELL                                             
                
   info:name              timestamp=1592630775739,
  value=cevent                           
  1 row(s) in 0.0100 seconds
   
  hbase(main):016:0> count 'c_student'  计算表
  2 row(s) in 0.0400 seconds
   
  => 2
  hbase(main):017:0> delete 'c_student','2020','info:age'  delete a single cell
  0 row(s) in 0.0300 seconds
   
  hbase(main):018:0> scan 'c_student',{STARTROW=>'2020',STOPROW=>'2020'}
  ROW                     COLUMN+CELL
   2020                   column=info:gender, timestamp=1592630739988, value=male
   2020                   column=info:name, timestamp=1592630775739, value=cevent
  1 row(s) in 0.0190 seconds
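The two delete commands in this transcript do different things: delete removes one cell (a single column of a row), while deleteall drops the entire row. A Python sketch of that distinction, using a plain dict as a stand-in table:

```python
# Toy in-memory table: rowkey -> {column -> value}, mirroring c_student.
table = {
    "2020": {"info:age": "31", "info:gender": "male", "info:name": "cevent"},
    "2021": {"info:name": "echo"},
}

def delete(table, rowkey, column):
    """Like `delete 'c_student','2020','info:age'`: remove one cell."""
    table[rowkey].pop(column, None)

def deleteall(table, rowkey):
    """Like `deleteall 'c_student','2021'`: remove the whole row."""
    table.pop(rowkey, None)

delete(table, "2020", "info:age")
deleteall(table, "2021")
print(sorted(table["2020"]))  # ['info:gender', 'info:name']
print("2021" in table)        # False
```

In real HBase neither command rewrites data in place; both write tombstone markers that mask the old cells until a major compaction physically removes them.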
   
  hbase(main):019:0> deleteall 'c_student','2021'  delete an entire row
  0 row(s) in 0.0030 seconds
   
  hbase(main):020:0> scan 'c_student'
  ROW                     COLUMN+CELL
   2020                   column=info:gender, timestamp=1592630739988, value=male
   2020                   column=info:name, timestamp=1592630775739, value=cevent
  1 row(s) in 0.0100 seconds
   
  hbase(main):021:0> truncate 'c_student'  wipe all data, keep the table schema
  Truncating 'c_student' table (it may take a while):
   - Disabling table...
   - Truncating table...
  0 row(s) in 3.9510 seconds
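As the shell output shows, truncate first disables the table, then drops its data and recreates it empty with the same column-family settings. A small Python sketch of that behavior (the Table class is hypothetical, just to model schema-versus-data):

```python
# Toy table holding both schema (column-family settings) and row data.
class Table:
    def __init__(self, schema):
        self.schema = schema   # e.g. {'info': {'VERSIONS': '1'}}
        self.rows = {}
        self.enabled = True

def truncate(t):
    """Mirror `truncate`: disable, drop all rows, keep schema, re-enable."""
    t.enabled = False          # "Disabling table..."
    t.rows.clear()             # "Truncating table..."
    t.enabled = True           # table comes back empty, schema unchanged

t = Table({"info": {"VERSIONS": "1"}})
t.rows["2020"] = {"info:name": "cevent"}
truncate(t)
print(len(t.rows), t.schema)  # 0 {'info': {'VERSIONS': '1'}}
```

This is why the describe that follows still shows the 'info' family with its original settings even though scan would now return zero rows.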
   
  hbase(main):022:0> describe 'c_student'
  Table c_student is ENABLED
  c_student
  COLUMN FAMILIES DESCRIPTION
  {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
  1 row(s) in 0.0190 seconds
   
  hbase(main):023:0> disable 'c_student'
  0 row(s) in 2.2580 seconds
   
  hbase(main):024:0> drop 'c_student'
  0 row(s) in 1.2630 seconds
   
  hbase(main):025:0> list
  TABLE
  0 row(s) in 0.0180 seconds
   
  => []
  hbase(main):026:0> create 'c_student','info'
  0 row(s) in 1.2410 seconds
   
  => Hbase::Table - c_student
  hbase(main):027:0> alter 'c_student',{NAME=>'info',VERSIONS=>3}
  Updating all regions with the new schema...
  1/1 regions updated.
  Done.
  0 row(s) in 2.1920 seconds
   
  hbase(main):028:0> list
  TABLE
  c_student
  1 row(s) in 0.0080 seconds
   
  => ["c_student"]
  hbase(main):029:0> describe 'c_student'
  Table c_student is ENABLED
  c_student
  COLUMN FAMILIES DESCRIPTION
  {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
  1 row(s) in 0.0350 seconds
   
  hbase(main):030:0> put 'c_student','2020','info:name','cevent'
  0 row(s) in 0.0180 seconds
   
  hbase(main):031:0> put 'c_student','2020','info:name','echo'
  0 row(s) in 0.0080 seconds
   
  hbase(main):032:0> put 'c_student','2020','info:name','wuwu'
  0 row(s) in 0.0250 seconds
   
  hbase(main):033:0> get 'c_student','2020',{COLUMN=>'info:name',VERSIONS=>3}
  COLUMN                  CELL
   info:name              timestamp=1592632189153, value=wuwu
   info:name              timestamp=1592632182162, value=echo
   info:name              timestamp=1592632170588, value=cevent
  1 row(s) in 0.0080 seconds
   
  hbase(main):034:0> put 'c_student','2020','info:name','LIULIU'
  0 row(s) in 0.0070 seconds
   
  hbase(main):035:0> get 'c_student','2020',{COLUMN=>'info:name',VERSIONS=>3}
  COLUMN                  CELL
   info:name              timestamp=1592632266566, value=LIULIU
   info:name              timestamp=1592632189153, value=wuwu
   info:name              timestamp=1592632182162, value=echo
  1 row(s) in 0.0060 seconds
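This is exactly what VERSIONS => 3 means: each cell keeps up to three timestamped values, newest first, and the fourth put ('LIULIU') evicts the oldest one ('cevent'). A Python sketch of that retention rule (timestamps here are simple hypothetical counters, not real epoch millis):

```python
# Toy versioned cell: keep at most MAX_VERSIONS (timestamp, value) pairs,
# newest first, like a column family altered with VERSIONS => 3.
MAX_VERSIONS = 3
cell = []  # list of (timestamp, value), newest first

def put(cell, ts, value):
    cell.insert(0, (ts, value))   # newest version goes to the front
    del cell[MAX_VERSIONS:]       # evict anything beyond the version limit

for ts, v in enumerate(["cevent", "echo", "wuwu", "LIULIU"]):
    put(cell, ts, v)

print([v for _, v in cell])  # ['LIULIU', 'wuwu', 'echo'] -- 'cevent' evicted
```

A plain get (without the VERSIONS option) returns only the newest value; older versions stay readable only while they survive the limit and compaction.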
  
 

