Setting Up an Eclipse + Maven HBase Development Environment on Linux

    Since Hive does not support updating data, I had to consider using HBase to implement a daily data-update requirement. I ran into quite a few problems while setting up the environment, so I am writing this post both as a reference for anyone about to start with HBase and as a note to myself.

  • Development environment and versions: HBase runs in cluster mode, using a standalone ZooKeeper cluster rather than HBase's bundled ZooKeeper, though this should not matter for development.

zookeeper-3.4.6 cluster:

host     ip
spark1   192.168.4.31
spark2   192.168.4.32
spark3   192.168.4.33
spark4   192.168.4.34
spark5   192.168.4.35

hbase-0.98.13 cluster:

host     ip
hadoop1  192.168.4.21
hadoop2  192.168.4.22
hadoop3  192.168.4.23
hadoop4  192.168.4.24
hadoop5  192.168.4.25

hadoop-2.4.1 cluster:

host     ip            node
hadoop1  192.168.4.21  datanode
hadoop2  192.168.4.22  datanode
hadoop3  192.168.4.23  datanode
hadoop4  192.168.4.24  datanode
hadoop5  192.168.4.25  datanode
hadoop6  192.168.5.19  namenode
hadoop7  192.168.4.26  datanode

  • HBase configuration file hbase-site.xml. Note that hbase.rootdir points at the HDFS namenode of the Hadoop cluster above (hadoop6:9000), and hbase.zookeeper.quorum lists the standalone ZooKeeper nodes.

<configuration>
	<property>
		<name>hbase.rootdir</name>
		<value>hdfs://hadoop6:9000/hbase</value>
	</property>
	<property>
		<name>hbase.cluster.distributed</name>
		<value>true</value>
	</property>
	<property>
		<name>hbase.zookeeper.quorum</name>
		<value>spark1,spark2,spark3,spark4,spark5</value>
	</property>
	<property>
		<name>hbase.zookeeper.property.clientPort</name>
		<value>2181</value>
	</property>
</configuration>

  • hosts configuration on the development machine; be sure to add every machine of the HBase and ZooKeeper clusters (a quick resolution check is sketched after the list below).

192.168.4.21  hadoop1
192.168.4.22  hadoop2
192.168.4.23  hadoop3
192.168.4.24  hadoop4
192.168.4.25  hadoop5
192.168.5.19  hadoop6
192.168.4.26  hadoop7

192.168.4.31  spark1
192.168.4.32  spark2
192.168.4.33  spark3
192.168.4.34  spark4
192.168.4.35  spark5
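
Before going further, it is worth confirming that the development machine really resolves every hostname above. A minimal throwaway check (not from the original post) could look like this:

import java.net.InetAddress;

public class HostsCheck {
	public static void main(String[] args) throws Exception {
		// Every Hadoop, HBase, and ZooKeeper host the client may need to reach.
		String[] hosts = { "hadoop1", "hadoop2", "hadoop3", "hadoop4", "hadoop5",
				"hadoop6", "hadoop7", "spark1", "spark2", "spark3", "spark4", "spark5" };
		for (String h : hosts) {
			// Throws UnknownHostException if /etc/hosts (or DNS) lacks an entry.
			System.out.println(h + " -> " + InetAddress.getByName(h).getHostAddress());
		}
	}
}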

  • Eclipse development environment: create a standard Java Maven project.

Directory structure

HbaseStudy-0.98.13/
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   └── resources
    │       ├── hbase-site.xml
    │       └── log4j.properties
    └── test
        └── java
            └── com
                └── hua
                    └── hbase
                        └── test
                            └── TestHbase.java

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<groupId>HbaseStudy-0.98.13</groupId>
	<artifactId>HbaseStudy-0.98.13</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<dependencies>
		<dependency>
			<groupId>org.apache.hbase</groupId>
			<artifactId>hbase-client</artifactId>
			<version>0.98.13-hadoop2</version>
		</dependency>
		<dependency>
			<groupId>org.apache.hbase</groupId>
			<artifactId>hbase-examples</artifactId>
			<version>0.98.13-hadoop2</version>
		</dependency>
		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-hdfs</artifactId>
			<version>2.2.0</version>
		</dependency>
		<dependency>
			<groupId>log4j</groupId>
			<artifactId>log4j</artifactId>
			<version>1.2.17</version>
		</dependency>
		<dependency>
			<groupId>junit</groupId>
			<artifactId>junit</artifactId>
			<version>4.12</version>
			<scope>test</scope>
		</dependency>
	</dependencies>
	<build>
		<plugins>
			<plugin>
				<artifactId>maven-compiler-plugin</artifactId>
				<version>3.3</version>
				<configuration>
					<source>1.7</source>
					<target>1.7</target>
				</configuration>
			</plugin>
		</plugins>
	</build>
</project>
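
With the pom in place, the project can be built and the tests run from the command line as well as from Eclipse (assuming a standard Maven install; in Eclipse with the m2e plugin, use Import > Existing Maven Projects):

mvn clean test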

Create an hbase-site.xml file under src/main/resources and copy in the contents of the cluster's HBase configuration; the ZooKeeper client-port property can be omitted since it uses the default (2181). If you would rather not ship an hbase-site.xml at all, the same settings can be applied in code, as sketched after the config below.

<?xml version="1.0" encoding="UTF-8"?>

<configuration>
	<property>
		<name>hbase.rootdir</name>
		<value>hdfs://hadoop6:9000/hbase</value>
	</property>
	<property>
		<name>hbase.cluster.distributed</name>
		<value>true</value>
	</property>
	<property>
		<name>hbase.zookeeper.quorum</name>
		<value>spark1,spark2,spark3,spark4,spark5</value>
	</property>
</configuration>
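
As an alternative to the classpath hbase-site.xml, the same client settings can be applied programmatically. A minimal sketch, assuming the quorum hosts above (HBaseConfiguration.create() still merges any hbase-site.xml found on the classpath first):

Configuration conf = HBaseConfiguration.create();
// Equivalent to the hbase-site.xml entries above.
conf.set("hbase.zookeeper.quorum", "spark1,spark2,spark3,spark4,spark5");
conf.set("hbase.zookeeper.property.clientPort", "2181");
HBaseAdmin admin = new HBaseAdmin(conf);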
Test program TestHbase.java

package com.hua.hbase.test;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.log4j.Logger;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class TestHbase {
	private final Logger logger = Logger.getLogger(TestHbase.class);
	
	private Configuration conf = null;
	private HBaseAdmin admin = null;

	@Before
	public void before() throws IOException {
		conf = HBaseConfiguration.create();
		admin = new HBaseAdmin(conf);
	}
	
	@After
	public void after() throws IOException {
		admin.close();
	}

	@Test
	public void testCreate() throws Exception {
		TableName tableName = TableName.valueOf("user1");
		
		// Describe the table with two column families: "basic" and "advance".
		HTableDescriptor tableDesc = new HTableDescriptor(tableName);
		tableDesc.addFamily(new HColumnDescriptor(Bytes.toBytes("basic")));
		tableDesc.addFamily(new HColumnDescriptor(Bytes.toBytes("advance")));
		
		// If the table already exists, drop it first; a table must be
		// disabled before it can be deleted.
		if(admin.tableExists(tableName)) {
			admin.disableTable(tableName);
			admin.deleteTable(tableName);
		}
		
		admin.createTable(tableDesc);
		
		logger.info("Create table success!");
	}
}
Check that the user1 table was created successfully
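
Besides checking with list or describe 'user1' in the HBase shell, a small JUnit check in the same TestHbase class can confirm the table and its column families. This is an illustrative sketch (testTableCreated is not from the original post); it reuses the admin and logger fields:

	@Test
	public void testTableCreated() throws Exception {
		TableName tableName = TableName.valueOf("user1");
		if (admin.tableExists(tableName)) {
			// getTableDescriptor is part of the 0.98 HBaseAdmin API.
			HTableDescriptor desc = admin.getTableDescriptor(tableName);
			logger.info("user1 exists with column families: "
					+ java.util.Arrays.toString(desc.getColumnFamilies()));
		}
	}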


Execution log

[org.apache.hadoop.metrics2.lib.MutableMetricsFactory]field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[Rate of successful kerberos logins and latency (milliseconds)], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
[org.apache.hadoop.metrics2.lib.MutableMetricsFactory]field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[Rate of failed kerberos logins and latency (milliseconds)], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
[org.apache.hadoop.metrics2.impl.MetricsSystemImpl]UgiMetrics, User and group related metrics
[org.apache.hadoop.security.authentication.util.KerberosName]Kerberos krb5 configuration not found, setting default realm to empty
[org.apache.hadoop.security.Groups] Creating new Groups object
[org.apache.hadoop.util.NativeCodeLoader]Trying to load the custom-built native-hadoop library...
[org.apache.hadoop.util.NativeCodeLoader]Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
[org.apache.hadoop.util.NativeCodeLoader]java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[org.apache.hadoop.util.NativeCodeLoader]Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback]Falling back to shell based
[org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback]Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
[org.apache.hadoop.security.Groups]Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000
[org.apache.hadoop.security.UserGroupInformation]hadoop login
[org.apache.hadoop.security.UserGroupInformation]hadoop login commit
[org.apache.hadoop.security.UserGroupInformation]using local user:UnixPrincipal: hadoop
[org.apache.hadoop.security.UserGroupInformation]UGI loginUser:hadoop (auth:SIMPLE)
[org.apache.hadoop.util.Shell]setsid exited with exit code 0
[org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper]Process identifier=hconnection-0x18e4d5ba connecting to ZooKeeper ensemble=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181
[org.apache.zookeeper.ZooKeeper]Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
[org.apache.zookeeper.ZooKeeper]Client environment:host.name=hadoop-nenusoul
[org.apache.zookeeper.ZooKeeper]Client environment:java.version=1.7.0_75
[org.apache.zookeeper.ZooKeeper]Client environment:java.vendor=Oracle Corporation
[org.apache.zookeeper.ZooKeeper]Client environment:java.home=/home/hadoop/work/DevelopTools/jdk1.7.0_75/jre
[org.apache.zookeeper.ZooKeeper]Client environment:java.class.path=/home/hadoop/workspace/HbaseStudy-0.98.13/target/test-classes:/home/hadoop/workspace/HbaseStudy-0.98.13/target/classes:/home/hadoop/.m2/repository/org/apache/hbase/hbase-client/0.98.13-hadoop2/hbase-client-0.98.13-hadoop2.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-annotations/0.98.13-hadoop2/hbase-annotations-0.98.13-hadoop2.jar:/home/hadoop/work/DevelopTools/jdk1.7.0_75/lib/tools.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-common/0.98.13-hadoop2/hbase-common-0.98.13-hadoop2.jar:/home/hadoop/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-protocol/0.98.13-hadoop2/hbase-protocol-0.98.13-hadoop2.jar:/home/hadoop/.m2/repository/commons-codec/commons-codec/1.7/commons-codec-1.7.jar:/home/hadoop/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/home/hadoop/.m2/repository/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/home/hadoop/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/home/hadoop/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar:/home/hadoop/.m2/repository/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar:/home/hadoop/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/home/hadoop/.m2/repository/io/netty/netty/3.6.6.Final/netty-3.6.6.Final.jar:/home/hadoop/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar:/home/hadoop/.m2/repository/org/slf4j/slf4j-api/1.6.1/slf4j-api-1.6.1.jar:/home/hadoop/.m2/repository/org/slf4j/slf4j-log4j12/1.6.1/slf4j-log4j12-1.6.1.jar:/home/hadoop/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar:/home/hadoop/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.8.8/jackson-mapper-asl-1.8.8.jar:/home/hadoop/.m2/repository/org/jruby/jcodings/jcodings/1.0.8/jcodings-1.0.8.jar:/home/hadoop/.m2/repository/org/jruby/joni/joni/2.1.2/joni-2.1.2.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-auth/2.2.0/hadoop-auth-2.2.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-common/2.2.0/hadoop-common-2.2.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-annotations/2.2.0/hadoop-annotations-2.2.0.jar:/home/hadoop/.m2/repository/org/apache/commons/commons-math/2.1/commons-math-2.1.jar:/home/hadoop/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar:/home/hadoop/.m2/repository/commons-net/commons-net/3.1/commons-net-3.1.jar:/home/hadoop/.m2/repository/com/sun/jersey/jersey-json/1.9/jersey-json-1.9.jar:/home/hadoop/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/hadoop/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/hadoop/.m2/repository/com/sun/xml/bind/jaxb-impl/2.2.3-1/jaxb-impl-2.2.3-1.jar:/home/hadoop/.m2/repository/javax/xml/bind/jaxb-api/2.2.2/jaxb-api-2.2.2.jar:/home/hadoop/.m2/repository/javax/activation/activation/1.1/activation-1.1.jar:/home/hadoop/.m2/repository/org/codehaus/jackson/jackson-xc/1.8.3/jackson-xc-1.8.3.jar:/home/hadoop/.m2/repository/commons-el/commons-el/1.0/commons-el-1.0.jar:/home/hadoop/.m2/repository/net/java/dev/jets3t/jets3t/0.6.1/jets3t-0.6.1.jar:/home/hadoop/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/hadoop/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/hadoop/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar
:/home/hadoop/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/home/hadoop/.m2/repository/org/apache/avro/avro/1.7.4/avro-1.7.4.jar:/home/hadoop/.m2/repository/com/thoughtworks/paranamer/paranamer/2.3/paranamer-2.3.jar:/home/hadoop/.m2/repository/org/xerial/snappy/snappy-java/1.0.4.1/snappy-java-1.0.4.1.jar:/home/hadoop/.m2/repository/com/jcraft/jsch/0.1.42/jsch-0.1.42.jar:/home/hadoop/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/home/hadoop/.m2/repository/org/tukaani/xz/1.0/xz-1.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.2.0/hadoop-mapreduce-client-core-2.2.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.2.0/hadoop-yarn-common-2.2.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-yarn-api/2.2.0/hadoop-yarn-api-2.2.0.jar:/home/hadoop/.m2/repository/com/google/inject/guice/3.0/guice-3.0.jar:/home/hadoop/.m2/repository/javax/inject/javax.inject/1/javax.inject-1.jar:/home/hadoop/.m2/repository/aopalliance/aopalliance/1.0/aopalliance-1.0.jar:/home/hadoop/.m2/repository/com/sun/jersey/contribs/jersey-guice/1.9/jersey-guice-1.9.jar:/home/hadoop/.m2/repository/com/google/inject/extensions/guice-servlet/3.0/guice-servlet-3.0.jar:/home/hadoop/.m2/repository/com/github/stephenc/findbugs/findbugs-annotations/1.3.9-1/findbugs-annotations-1.3.9-1.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-examples/0.98.13-hadoop2/hbase-examples-0.98.13-hadoop2.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-server/0.98.13-hadoop2/hbase-server-0.98.13-hadoop2.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-prefix-tree/0.98.13-hadoop2/hbase-prefix-tree-0.98.13-hadoop2.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-common/0.98.13-hadoop2/hbase-common-0.98.13-hadoop2-tests.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-hadoop-compat/0.98.13-hadoop2/hbase-hadoop-compat-0.98.13-hadoop2.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-hadoop2-compat/0.98.13-hadoop2/hbase-hadoop2-compat-0.98.13-hadoop2.jar:/home/hadoop/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/hadoop/.m2/repository/com/github/stephenc/high-scale-lib/high-scale-lib/1.1.1/high-scale-lib-1.1.1.jar:/home/hadoop/.m2/repository/org/mortbay/jetty/jetty-sslengine/6.1.26/jetty-sslengine-6.1.26.jar:/home/hadoop/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar:/home/hadoop/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar:/home/hadoop/.m2/repository/org/mortbay/jetty/servlet-api-2.5/6.1.14/servlet-api-2.5-6.1.14.jar:/home/hadoop/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.8.8/jackson-jaxrs-1.8.8.jar:/home/hadoop/.m2/repository/tomcat/jasper-compiler/5.5.23/jasper-compiler-5.5.23.jar:/home/hadoop/.m2/repository/org/jamon/jamon-runtime/2.3.1/jamon-runtime-2.3.1.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-client/2.2.0/hadoop-client-2.2.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-app/2.2.0/hadoop-mapreduce-client-app-2.2.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-common/2.2.0/hadoop-mapreduce-client-common-2.2.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-yarn-client/2.2.0/hadoop-yarn-client-2.2.0.jar:/home/hadoop/.m2/repository/com/sun/jersey/jersey-test-framework/jersey-test-framework-grizzly2/1.9/jersey-test-framework-grizzly2-1.9.jar:/home/hadoop/.m2/repository/com/sun/jersey/jersey-test-
framework/jersey-test-framework-core/1.9/jersey-test-framework-core-1.9.jar:/home/hadoop/.m2/repository/javax/servlet/javax.servlet-api/3.0.1/javax.servlet-api-3.0.1.jar:/home/hadoop/.m2/repository/com/sun/jersey/jersey-client/1.9/jersey-client-1.9.jar:/home/hadoop/.m2/repository/com/sun/jersey/jersey-grizzly2/1.9/jersey-grizzly2-1.9.jar:/home/hadoop/.m2/repository/org/glassfish/grizzly/grizzly-http/2.1.2/grizzly-http-2.1.2.jar:/home/hadoop/.m2/repository/org/glassfish/grizzly/grizzly-framework/2.1.2/grizzly-framework-2.1.2.jar:/home/hadoop/.m2/repository/org/glassfish/gmbal/gmbal-api-only/3.0.0-b023/gmbal-api-only-3.0.0-b023.jar:/home/hadoop/.m2/repository/org/glassfish/external/management-api/3.0.0-b012/management-api-3.0.0-b012.jar:/home/hadoop/.m2/repository/org/glassfish/grizzly/grizzly-http-server/2.1.2/grizzly-http-server-2.1.2.jar:/home/hadoop/.m2/repository/org/glassfish/grizzly/grizzly-rcm/2.1.2/grizzly-rcm-2.1.2.jar:/home/hadoop/.m2/repository/org/glassfish/grizzly/grizzly-http-servlet/2.1.2/grizzly-http-servlet-2.1.2.jar:/home/hadoop/.m2/repository/org/glassfish/javax.servlet/3.1/javax.servlet-3.1.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-yarn-server-common/2.2.0/hadoop-yarn-server-common-2.2.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-shuffle/2.2.0/hadoop-mapreduce-client-shuffle-2.2.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-jobclient/2.2.0/hadoop-mapreduce-client-jobclient-2.2.0.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-thrift/0.98.13-hadoop2/hbase-thrift-0.98.13-hadoop2.jar:/home/hadoop/.m2/repository/org/apache/thrift/libthrift/0.9.0/libthrift-0.9.0.jar:/home/hadoop/.m2/repository/org/apache/httpcomponents/httpclient/4.1.3/httpclient-4.1.3.jar:/home/hadoop/.m2/repository/org/apache/httpcomponents/httpcore/4.1.3/httpcore-4.1.3.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.2.0/hadoop-hdfs-2.2.0.jar:/home/hadoop/.m2/repository/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/home/hadoop/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/hadoop/.m2/repository/com/sun/jersey/jersey-core/1.9/jersey-core-1.9.jar:/home/hadoop/.m2/repository/com/sun/jersey/jersey-server/1.9/jersey-server-1.9.jar:/home/hadoop/.m2/repository/asm/asm/3.1/asm-3.1.jar:/home/hadoop/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/hadoop/.m2/repository/commons-daemon/commons-daemon/1.0.13/commons-daemon-1.0.13.jar:/home/hadoop/.m2/repository/javax/servlet/jsp/jsp-api/2.1/jsp-api-2.1.jar:/home/hadoop/.m2/repository/javax/servlet/servlet-api/2.5/servlet-api-2.5.jar:/home/hadoop/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.8.8/jackson-core-asl-1.8.8.jar:/home/hadoop/.m2/repository/tomcat/jasper-runtime/5.5.23/jasper-runtime-5.5.23.jar:/home/hadoop/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/hadoop/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/hadoop/.m2/repository/junit/junit/4.12/junit-4.12.jar:/home/hadoop/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/hadoop/work/DevelopTools/eclipse/configuration/org.eclipse.osgi/380/0/.cp/:/home/hadoop/work/DevelopTools/eclipse/configuration/org.eclipse.osgi/379/0/.cp/
[org.apache.zookeeper.ZooKeeper]Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[org.apache.zookeeper.ZooKeeper]Client environment:java.io.tmpdir=/tmp
[org.apache.zookeeper.ZooKeeper]Client environment:java.compiler=<NA>
[org.apache.zookeeper.ZooKeeper]Client environment:os.name=Linux
[org.apache.zookeeper.ZooKeeper]Client environment:os.arch=amd64
[org.apache.zookeeper.ZooKeeper]Client environment:os.version=3.13.0-32-generic
[org.apache.zookeeper.ZooKeeper]Client environment:user.name=hadoop
[org.apache.zookeeper.ZooKeeper]Client environment:user.home=/home/hadoop
[org.apache.zookeeper.ZooKeeper]Client environment:user.dir=/home/hadoop/workspace/HbaseStudy-0.98.13
[org.apache.zookeeper.ZooKeeper]Initiating client connection, connectString=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181 sessionTimeout=90000 watcher=hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase
[org.apache.zookeeper.ClientCnxn]zookeeper.disableAutoWatchReset is false
[org.apache.zookeeper.ClientCnxn]Opening socket connection to server spark1/192.168.4.31:2181. Will not attempt to authenticate using SASL (unknown error)
[org.apache.zookeeper.ClientCnxn]Socket connection established to spark1/192.168.4.31:2181, initiating session
[org.apache.zookeeper.ClientCnxn]Session establishment request sent on spark1/192.168.4.31:2181
[org.apache.zookeeper.ClientCnxn]Session establishment complete on server spark1/192.168.4.31:2181, sessionid = 0x14f878806d5000c, negotiated timeout = 40000
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]hconnection-0x18e4d5ba-0x14f878806d5000c connected
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,47244640502,0  request:: '/hbase/hbaseid,F  response:: s{42949673661,47244640264,1438670836817,1441088022742,4,0,0,0,67,0,42949673661} 
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,47244640502,0  request:: '/hbase/hbaseid,F  response:: #ffffffff000146d61737465723a3630303030ffffffa5fffffff1ffffffe0fffffff3481346ffffffa450425546a2435313637666262392d356133622d346634382d386532302d366530623761383562373063,s{42949673661,47244640264,1438670836817,1441088022742,4,0,0,0,67,0,42949673661} 
[org.apache.hadoop.hdfs.BlockReaderLocal]dfs.client.use.legacy.blockreader.local = false
[org.apache.hadoop.hdfs.BlockReaderLocal]dfs.client.read.shortcircuit = false
[org.apache.hadoop.hdfs.BlockReaderLocal]dfs.client.domain.socket.data.traffic = false
[org.apache.hadoop.hdfs.BlockReaderLocal]dfs.domain.socket.path = 
[org.apache.hadoop.metrics2.impl.MetricsSystemImpl]StartupProgress, NameNode startup progress
[org.apache.hadoop.io.retry.RetryUtils]multipleLinearRandomRetry = null
[org.apache.hadoop.ipc.Server]rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@52474d7f
[org.apache.hadoop.hdfs.BlockReaderLocal]Both short-circuit local reads and UNIX domain socket are disabled.
[org.apache.hadoop.ipc.RpcClient]Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62f73ff7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, maxIdleTime=10000, maxRetries=0, fallbackAllowed=false, ping interval=60000ms, bind address=null
[org.apache.hadoop.conf.Configuration.deprecation]hadoop.native.lib is deprecated. Instead, use io.native.lib.available
[org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper]Process identifier=catalogtracker-on-hconnection-0x18e4d5ba connecting to ZooKeeper ensemble=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181
[org.apache.zookeeper.ZooKeeper]Initiating client connection, connectString=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase
[org.apache.zookeeper.ClientCnxn]Opening socket connection to server spark4/192.168.4.34:2181. Will not attempt to authenticate using SASL (unknown error)
[org.apache.zookeeper.ClientCnxn]Socket connection established to spark4/192.168.4.34:2181, initiating session
[org.apache.hadoop.hbase.catalog.CatalogTracker]Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@21b7e734
[org.apache.zookeeper.ClientCnxn]Session establishment request sent on spark4/192.168.4.34:2181
[org.apache.zookeeper.ClientCnxn]Session establishment complete on server spark4/192.168.4.34:2181, sessionid = 0x44f8787f7de000c, negotiated timeout = 40000
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]catalogtracker-on-hconnection-0x18e4d5ba-0x44f8787f7de000c connected
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x44f8787f7de000c, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,47244640503,0  request:: '/hbase/meta-region-server,T  response:: s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.hbase.zookeeper.ZKUtil]catalogtracker-on-hconnection-0x18e4d5ba-0x44f8787f7de000c, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Set watcher on existing znode=/hbase/meta-region-server
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x44f8787f7de000c, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,47244640503,0  request:: '/hbase/meta-region-server,T  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 3,4  replyHeader:: 3,47244640503,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]Use SIMPLE authentication for service ClientService, sasl=false
[org.apache.hadoop.ipc.RpcClient]Connecting to hadoop1/192.168.4.21:60020
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: starting, connections 1
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 0, totalSize: 12 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 0 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 1 method_name: "Scan" request_param: true priority: 100
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 1 cell_block_meta { length: 877 }, totalSize: 900 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 2 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 2, totalSize: 8 bytes
[org.apache.hadoop.hbase.catalog.CatalogTracker]Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@21b7e734
[org.apache.zookeeper.ZooKeeper]Closing session: 0x44f8787f7de000c
[org.apache.zookeeper.ClientCnxn]Closing client for session: 0x44f8787f7de000c
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x44f8787f7de000c, packet:: clientPath:null serverPath:null finished:false header:: 3,-11  replyHeader:: 3,47244640504,0  request:: null response:: null
[org.apache.zookeeper.ClientCnxn]Disconnecting client for session: 0x44f8787f7de000c
[org.apache.zookeeper.ZooKeeper]Session: 0x44f8787f7de000c closed
[org.apache.zookeeper.ClientCnxn]EventThread shut down
[org.apache.zookeeper.ClientCnxn]An exception was thrown while closing send thread for session 0x44f8787f7de000c : Unable to read additional data from server sessionid 0x44f8787f7de000c, likely server has closed socket
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 4,3  replyHeader:: 4,47244640504,0  request:: '/hbase,F  response:: s{42949673651,42949673651,1438670836103,1438670836103,0,33,0,0,0,15,47244640306} 
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 5,4  replyHeader:: 5,47244640504,0  request:: '/hbase/master,F  response:: #ffffffff000146d61737465723a3630303030ffffffecffffffa5ffffffc86effffff96fffffffcffffffa24f50425546a14a76861646f6f703110ffffffe0ffffffd4318ffffff98ffffffc3ffffffa8ffffffbcfffffff829100,s{47244640263,47244640263,1441088021642,1441088021642,0,0,0,310615917360709632,53,0,47244640263} 
[org.apache.hadoop.ipc.RpcClient]Use SIMPLE authentication for service MasterService, sasl=false
[org.apache.hadoop.ipc.RpcClient]Connecting to hadoop1/192.168.4.21:60000
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: wrote request header call_id: 3 method_name: "IsMasterRunning" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: starting, connections 2
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: got response header call_id: 3, totalSize: 6 bytes
[org.apache.hadoop.hbase.client.HBaseAdmin]Started disable of user1
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: wrote request header call_id: 4 method_name: "DisableTable" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: got response header call_id: 4, totalSize: 4 bytes
[org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper]Process identifier=catalogtracker-on-hconnection-0x18e4d5ba connecting to ZooKeeper ensemble=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181
[org.apache.zookeeper.ZooKeeper]Initiating client connection, connectString=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase
[org.apache.hadoop.hbase.catalog.CatalogTracker]Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@3c5d0d01
[org.apache.zookeeper.ClientCnxn]Opening socket connection to server spark3/192.168.4.33:2181. Will not attempt to authenticate using SASL (unknown error)
[org.apache.zookeeper.ClientCnxn]Socket connection established to spark3/192.168.4.33:2181, initiating session
[org.apache.zookeeper.ClientCnxn]Session establishment request sent on spark3/192.168.4.33:2181
[org.apache.zookeeper.ClientCnxn]Session establishment complete on server spark3/192.168.4.33:2181, sessionid = 0x34f8787fd3c0011, negotiated timeout = 40000
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]catalogtracker-on-hconnection-0x18e4d5ba-0x34f8787fd3c0011 connected
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x34f8787fd3c0011, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,47244640509,0  request:: '/hbase/meta-region-server,T  response:: s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.hbase.zookeeper.ZKUtil]catalogtracker-on-hconnection-0x18e4d5ba-0x34f8787fd3c0011, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Set watcher on existing znode=/hbase/meta-region-server
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x34f8787fd3c0011, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,47244640509,0  request:: '/hbase/meta-region-server,T  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 6,4  replyHeader:: 6,47244640509,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 5 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 5, totalSize: 12 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 6 method_name: "Scan" request_param: true priority: 100
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 6 cell_block_meta { length: 877 }, totalSize: 900 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 7 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 7, totalSize: 8 bytes
[org.apache.hadoop.hbase.catalog.CatalogTracker]Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@3c5d0d01
[org.apache.zookeeper.ZooKeeper]Closing session: 0x34f8787fd3c0011
[org.apache.zookeeper.ClientCnxn]Closing client for session: 0x34f8787fd3c0011
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x34f8787fd3c0011, packet:: clientPath:null serverPath:null finished:false header:: 3,-11  replyHeader:: 3,47244640510,0  request:: null response:: null
[org.apache.zookeeper.ClientCnxn]Disconnecting client for session: 0x34f8787fd3c0011
[org.apache.zookeeper.ZooKeeper]Session: 0x34f8787fd3c0011 closed
[org.apache.zookeeper.ClientCnxn]An exception was thrown while closing send thread for session 0x34f8787fd3c0011 : Unable to read additional data from server sessionid 0x34f8787fd3c0011, likely server has closed socket
[org.apache.zookeeper.ClientCnxn]EventThread shut down
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 7,4  replyHeader:: 7,47244640511,0  request:: '/hbase/table/user1,F  response:: #ffffffff000146d61737465723a3630303030fffffff9761832ffffffe05dffffff98ffffffa85042554682,s{47244640452,47244640507,1441089997328,1441093181341,4,0,0,0,31,0,47244640452} 
[org.apache.hadoop.hbase.client.HBaseAdmin]Sleeping= 100ms, waiting for all regions to be disabled in user1
[org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper]Process identifier=catalogtracker-on-hconnection-0x18e4d5ba connecting to ZooKeeper ensemble=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181
[org.apache.zookeeper.ZooKeeper]Initiating client connection, connectString=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase
[org.apache.hadoop.hbase.catalog.CatalogTracker]Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@2356cab0
[org.apache.zookeeper.ClientCnxn]Opening socket connection to server spark5/192.168.4.35:2181. Will not attempt to authenticate using SASL (unknown error)
[org.apache.zookeeper.ClientCnxn]Socket connection established to spark5/192.168.4.35:2181, initiating session
[org.apache.zookeeper.ClientCnxn]Session establishment request sent on spark5/192.168.4.35:2181
[org.apache.zookeeper.ClientCnxn]Session establishment complete on server spark5/192.168.4.35:2181, sessionid = 0x54f8787fd180008, negotiated timeout = 40000
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]catalogtracker-on-hconnection-0x18e4d5ba-0x54f8787fd180008 connected
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x54f8787fd180008, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,47244640513,0  request:: '/hbase/meta-region-server,T  response:: s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.hbase.zookeeper.ZKUtil]catalogtracker-on-hconnection-0x18e4d5ba-0x54f8787fd180008, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Set watcher on existing znode=/hbase/meta-region-server
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x54f8787fd180008, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,47244640513,0  request:: '/hbase/meta-region-server,T  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 8,4  replyHeader:: 8,47244640513,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 8 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 8, totalSize: 12 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 9 method_name: "Scan" request_param: true priority: 100
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 9 cell_block_meta { length: 877 }, totalSize: 900 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 10 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 10, totalSize: 8 bytes
[org.apache.hadoop.hbase.catalog.CatalogTracker]Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@2356cab0
[org.apache.zookeeper.ZooKeeper]Closing session: 0x54f8787fd180008
[org.apache.zookeeper.ClientCnxn]Closing client for session: 0x54f8787fd180008
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x54f8787fd180008, packet:: clientPath:null serverPath:null finished:false header:: 3,-11  replyHeader:: 3,47244640514,0  request:: null response:: null
[org.apache.zookeeper.ClientCnxn]Disconnecting client for session: 0x54f8787fd180008
[org.apache.zookeeper.ClientCnxn]An exception was thrown while closing send thread for session 0x54f8787fd180008 : Unable to read additional data from server sessionid 0x54f8787fd180008, likely server has closed socket
[org.apache.zookeeper.ZooKeeper]Session: 0x54f8787fd180008 closed
[org.apache.zookeeper.ClientCnxn]EventThread shut down
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 9,4  replyHeader:: 9,47244640514,0  request:: '/hbase/table/user1,F  response:: #ffffffff000146d61737465723a3630303030fffffff9761832ffffffe05dffffff98ffffffa85042554682,s{47244640452,47244640507,1441089997328,1441093181341,4,0,0,0,31,0,47244640452} 
[org.apache.hadoop.hbase.client.HBaseAdmin]Sleeping= 200ms, waiting for all regions to be disabled in user1
[org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper]Process identifier=catalogtracker-on-hconnection-0x18e4d5ba connecting to ZooKeeper ensemble=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181
[org.apache.zookeeper.ZooKeeper]Initiating client connection, connectString=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase
[org.apache.hadoop.hbase.catalog.CatalogTracker]Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@4bfed00f
[org.apache.zookeeper.ClientCnxn]Opening socket connection to server spark4/192.168.4.34:2181. Will not attempt to authenticate using SASL (unknown error)
[org.apache.zookeeper.ClientCnxn]Socket connection established to spark4/192.168.4.34:2181, initiating session
[org.apache.zookeeper.ClientCnxn]Session establishment request sent on spark4/192.168.4.34:2181
[org.apache.zookeeper.ClientCnxn]Session establishment complete on server spark4/192.168.4.34:2181, sessionid = 0x44f8787f7de000d, negotiated timeout = 40000
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]catalogtracker-on-hconnection-0x18e4d5ba-0x44f8787f7de000d connected
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x44f8787f7de000d, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,47244640515,0  request:: '/hbase/meta-region-server,T  response:: s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.hbase.zookeeper.ZKUtil]catalogtracker-on-hconnection-0x18e4d5ba-0x44f8787f7de000d, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Set watcher on existing znode=/hbase/meta-region-server
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x44f8787f7de000d, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,47244640515,0  request:: '/hbase/meta-region-server,T  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 10,4  replyHeader:: 10,47244640515,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 11 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 11, totalSize: 12 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 12 method_name: "Scan" request_param: true priority: 100
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 12 cell_block_meta { length: 877 }, totalSize: 900 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 13 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 13, totalSize: 8 bytes
[org.apache.hadoop.hbase.catalog.CatalogTracker]Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@4bfed00f
[org.apache.zookeeper.ZooKeeper]Closing session: 0x44f8787f7de000d
[org.apache.zookeeper.ClientCnxn]Closing client for session: 0x44f8787f7de000d
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x44f8787f7de000d, packet:: clientPath:null serverPath:null finished:false header:: 3,-11  replyHeader:: 3,47244640516,0  request:: null response:: null
[org.apache.zookeeper.ClientCnxn]Disconnecting client for session: 0x44f8787f7de000d
[org.apache.zookeeper.ZooKeeper]Session: 0x44f8787f7de000d closed
[org.apache.zookeeper.ClientCnxn]EventThread shut down
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 11,4  replyHeader:: 11,47244640516,0  request:: '/hbase/table/user1,F  response:: #ffffffff000146d61737465723a3630303030fffffff9761832ffffffe05dffffff98ffffffa85042554682,s{47244640452,47244640507,1441089997328,1441093181341,4,0,0,0,31,0,47244640452} 
[org.apache.hadoop.hbase.client.HBaseAdmin]Sleeping= 300ms, waiting for all regions to be disabled in user1
[org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper]Process identifier=catalogtracker-on-hconnection-0x18e4d5ba connecting to ZooKeeper ensemble=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181
[org.apache.zookeeper.ZooKeeper]Initiating client connection, connectString=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase
[org.apache.hadoop.hbase.catalog.CatalogTracker]Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@608916f9
[org.apache.zookeeper.ClientCnxn]Opening socket connection to server spark3/192.168.4.33:2181. Will not attempt to authenticate using SASL (unknown error)
[org.apache.zookeeper.ClientCnxn]Socket connection established to spark3/192.168.4.33:2181, initiating session
[org.apache.zookeeper.ClientCnxn]Session establishment request sent on spark3/192.168.4.33:2181
[org.apache.zookeeper.ClientCnxn]Session establishment complete on server spark3/192.168.4.33:2181, sessionid = 0x34f8787fd3c0012, negotiated timeout = 40000
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]catalogtracker-on-hconnection-0x18e4d5ba-0x34f8787fd3c0012 connected
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x34f8787fd3c0012, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,47244640517,0  request:: '/hbase/meta-region-server,T  response:: s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.hbase.zookeeper.ZKUtil]catalogtracker-on-hconnection-0x18e4d5ba-0x34f8787fd3c0012, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Set watcher on existing znode=/hbase/meta-region-server
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x34f8787fd3c0012, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,47244640517,0  request:: '/hbase/meta-region-server,T  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 12,4  replyHeader:: 12,47244640517,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 14 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 14, totalSize: 12 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 15 cell_block_meta { length: 877 }, totalSize: 900 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 15 method_name: "Scan" request_param: true priority: 100
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 16 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 16, totalSize: 8 bytes
[org.apache.hadoop.hbase.catalog.CatalogTracker]Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@608916f9
[org.apache.zookeeper.ZooKeeper]Closing session: 0x34f8787fd3c0012
[org.apache.zookeeper.ClientCnxn]Closing client for session: 0x34f8787fd3c0012
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x34f8787fd3c0012, packet:: clientPath:null serverPath:null finished:false header:: 3,-11  replyHeader:: 3,47244640518,0  request:: null response:: null
[org.apache.zookeeper.ClientCnxn]An exception was thrown while closing send thread for session 0x34f8787fd3c0012 : Unable to read additional data from server sessionid 0x34f8787fd3c0012, likely server has closed socket
[org.apache.zookeeper.ClientCnxn]Disconnecting client for session: 0x34f8787fd3c0012
[org.apache.zookeeper.ZooKeeper]Session: 0x34f8787fd3c0012 closed
[org.apache.zookeeper.ClientCnxn]EventThread shut down
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 13,4  replyHeader:: 13,47244640518,0  request:: '/hbase/table/user1,F  response:: #ffffffff000146d61737465723a3630303030fffffff9761832ffffffe05dffffff98ffffffa85042554682,s{47244640452,47244640507,1441089997328,1441093181341,4,0,0,0,31,0,47244640452} 
[org.apache.hadoop.hbase.client.HBaseAdmin]Sleeping= 500ms, waiting for all regions to be disabled in user1
[org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper]Process identifier=catalogtracker-on-hconnection-0x18e4d5ba connecting to ZooKeeper ensemble=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181
[org.apache.zookeeper.ZooKeeper]Initiating client connection, connectString=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase
[org.apache.zookeeper.ClientCnxn]Opening socket connection to server spark2/192.168.4.32:2181. Will not attempt to authenticate using SASL (unknown error)
[org.apache.hadoop.hbase.catalog.CatalogTracker]Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@c6424e2
[org.apache.zookeeper.ClientCnxn]Socket connection established to spark2/192.168.4.32:2181, initiating session
[org.apache.zookeeper.ClientCnxn]Session establishment request sent on spark2/192.168.4.32:2181
[org.apache.zookeeper.ClientCnxn]Session establishment complete on server spark2/192.168.4.32:2181, sessionid = 0x24f8787c243000d, negotiated timeout = 40000
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x24f8787c243000d, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,47244640521,0  request:: '/hbase/meta-region-server,T  response:: s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.hbase.zookeeper.ZKUtil]catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Set watcher on existing znode=/hbase/meta-region-server
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]catalogtracker-on-hconnection-0x18e4d5ba-0x24f8787c243000d connected
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x24f8787c243000d, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,47244640521,0  request:: '/hbase/meta-region-server,T  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 14,4  replyHeader:: 14,47244640521,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 17 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 17, totalSize: 12 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 18 method_name: "Scan" request_param: true priority: 100
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 18 cell_block_meta { length: 877 }, totalSize: 900 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 19 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 19, totalSize: 8 bytes
[org.apache.hadoop.hbase.catalog.CatalogTracker]Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@c6424e2
[org.apache.zookeeper.ZooKeeper]Closing session: 0x24f8787c243000d
[org.apache.zookeeper.ClientCnxn]Closing client for session: 0x24f8787c243000d
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x24f8787c243000d, packet:: clientPath:null serverPath:null finished:false header:: 3,-11  replyHeader:: 3,47244640522,0  request:: null response:: null
[org.apache.zookeeper.ClientCnxn]Disconnecting client for session: 0x24f8787c243000d
[org.apache.zookeeper.ClientCnxn]An exception was thrown while closing send thread for session 0x24f8787c243000d : Unable to read additional data from server sessionid 0x24f8787c243000d, likely server has closed socket
[org.apache.zookeeper.ZooKeeper]Session: 0x24f8787c243000d closed
[org.apache.zookeeper.ClientCnxn]EventThread shut down
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 15,4  replyHeader:: 15,47244640522,0  request:: '/hbase/table/user1,F  response:: #ffffffff000146d61737465723a3630303030ffffff8b65ffffff9efffffff8ffffff8428ffffffdcfffffff95042554681,s{47244640452,47244640519,1441089997328,1441093182350,5,0,0,0,31,0,47244640452} 
[org.apache.hadoop.hbase.client.HBaseAdmin]Disabled user1
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: wrote request header call_id: 20 method_name: "IsMasterRunning" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: got response header call_id: 20, totalSize: 6 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: wrote request header call_id: 21 method_name: "DeleteTable" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: got response header call_id: 21, totalSize: 4 bytes
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 16,4  replyHeader:: 16,47244640523,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 22 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 22, totalSize: 10 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: wrote request header call_id: 23 method_name: "IsMasterRunning" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: got response header call_id: 23, totalSize: 6 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: wrote request header call_id: 24 method_name: "GetTableDescriptors" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: got response header call_id: 24, totalSize: 538 bytes
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 17,4  replyHeader:: 17,47244640526,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 25 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 25, totalSize: 10 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: wrote request header call_id: 26 method_name: "IsMasterRunning" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: got response header call_id: 26, totalSize: 6 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: wrote request header call_id: 27 method_name: "GetTableDescriptors" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: got response header call_id: 27, totalSize: 4 bytes
[org.apache.hadoop.hbase.client.HBaseAdmin]Deleted user1
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: wrote request header call_id: 28 method_name: "IsMasterRunning" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: got response header call_id: 28, totalSize: 6 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: wrote request header call_id: 29 method_name: "CreateTable" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: got response header call_id: 29, totalSize: 4 bytes
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 18,4  replyHeader:: 18,47244640531,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 30 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 30, totalSize: 12 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 31 method_name: "Scan" request_param: true priority: 100
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 31 cell_block_meta { length: 436 }, totalSize: 457 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 32 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 32, totalSize: 8 bytes
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 19,4  replyHeader:: 19,47244640531,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 33 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 33, totalSize: 12 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 34 method_name: "Scan" request_param: true priority: 100
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 34 cell_block_meta { length: 436 }, totalSize: 457 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 35 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 35, totalSize: 8 bytes
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 20,4  replyHeader:: 20,47244640531,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 36 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 36, totalSize: 12 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 37 method_name: "Scan" request_param: true priority: 100
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 37 cell_block_meta { length: 436 }, totalSize: 457 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 38 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 38, totalSize: 8 bytes
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 21,4  replyHeader:: 21,47244640531,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 39 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 39, totalSize: 12 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 40 method_name: "Scan" request_param: true priority: 100
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 40 cell_block_meta { length: 436 }, totalSize: 457 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 41 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 41, totalSize: 8 bytes
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 22,4  replyHeader:: 22,47244640532,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 42 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 42, totalSize: 12 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 43 method_name: "Scan" request_param: true priority: 100
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 43 cell_block_meta { length: 565 }, totalSize: 588 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 44 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 44, totalSize: 8 bytes
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 23,4  replyHeader:: 23,47244640537,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 45 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 45, totalSize: 12 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 46 method_name: "Scan" request_param: true priority: 100
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 46 cell_block_meta { length: 877 }, totalSize: 900 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 47 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 47, totalSize: 8 bytes
[org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper]Process identifier=catalogtracker-on-hconnection-0x18e4d5ba connecting to ZooKeeper ensemble=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181
[org.apache.zookeeper.ZooKeeper]Initiating client connection, connectString=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase
[org.apache.hadoop.hbase.catalog.CatalogTracker]Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@20828fe4
[org.apache.zookeeper.ClientCnxn]Opening socket connection to server spark1/192.168.4.31:2181. Will not attempt to authenticate using SASL (unknown error)
[org.apache.zookeeper.ClientCnxn]Socket connection established to spark1/192.168.4.31:2181, initiating session
[org.apache.zookeeper.ClientCnxn]Session establishment request sent on spark1/192.168.4.31:2181
[org.apache.zookeeper.ClientCnxn]Session establishment complete on server spark1/192.168.4.31:2181, sessionid = 0x14f878806d5000d, negotiated timeout = 40000
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000d, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,47244640538,0  request:: '/hbase/meta-region-server,T  response:: s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.hbase.zookeeper.ZKUtil]catalogtracker-on-hconnection-0x18e4d5ba0x0, quorum=spark4:2181,spark3:2181,spark2:2181,spark1:2181,spark5:2181, baseZNode=/hbase Set watcher on existing znode=/hbase/meta-region-server
[org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]catalogtracker-on-hconnection-0x18e4d5ba-0x14f878806d5000d connected
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000d, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,47244640538,0  request:: '/hbase/meta-region-server,T  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 24,4  replyHeader:: 24,47244640538,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffff86ffffffeeffffffdd7f19ffffff85ffffffda1050425546a14a76861646f6f703110fffffff4ffffffd4318ffffffd4ffffffc8ffffffa8ffffffbcfffffff829100183,s{47244640306,47244640306,1441088028291,1441088028291,0,0,0,0,61,0,47244640306} 
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 48 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 48, totalSize: 13 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 49 method_name: "Scan" request_param: true priority: 100
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 49 cell_block_meta { length: 877 }, totalSize: 901 bytes
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: wrote request header call_id: 50 method_name: "Scan" request_param: true
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: got response header call_id: 50, totalSize: 9 bytes
[org.apache.hadoop.hbase.catalog.CatalogTracker]Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@20828fe4
[org.apache.zookeeper.ZooKeeper]Closing session: 0x14f878806d5000d
[org.apache.zookeeper.ClientCnxn]Closing client for session: 0x14f878806d5000d
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000d, packet:: clientPath:null serverPath:null finished:false header:: 3,-11  replyHeader:: 3,47244640539,0  request:: null response:: null
[org.apache.zookeeper.ClientCnxn]Disconnecting client for session: 0x14f878806d5000d
[org.apache.zookeeper.ZooKeeper]Session: 0x14f878806d5000d closed
[org.apache.zookeeper.ClientCnxn]EventThread shut down
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 25,4  replyHeader:: 25,47244640539,0  request:: '/hbase/table/user1,F  response:: #ffffffff000146d61737465723a36303030304b2fffffffc42632b4fffffffa75042554680,s{47244640530,47244640533,1441093183046,1441093184242,2,0,0,0,31,0,47244640530} 
[com.hua.hbase.test.TestHbase]Create table success!
[org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation]Closing master protocol: MasterService
[org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation]Closing zookeeper sessionid=0x14f878806d5000c
[org.apache.zookeeper.ZooKeeper]Closing session: 0x14f878806d5000c
[org.apache.zookeeper.ClientCnxn]Closing client for session: 0x14f878806d5000c
[org.apache.zookeeper.ClientCnxn]Reading reply sessionid:0x14f878806d5000c, packet:: clientPath:null serverPath:null finished:false header:: 26,-11  replyHeader:: 26,47244640540,0  request:: null response:: null
[org.apache.zookeeper.ClientCnxn]Disconnecting client for session: 0x14f878806d5000c
[org.apache.zookeeper.ClientCnxn]An exception was thrown while closing send thread for session 0x14f878806d5000c : Unable to read additional data from server sessionid 0x14f878806d5000c, likely server has closed socket
[org.apache.zookeeper.ClientCnxn]EventThread shut down
[org.apache.zookeeper.ZooKeeper]Session: 0x14f878806d5000c closed
[org.apache.hadoop.ipc.RpcClient]Stopping rpc client
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: closed
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: closed
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60020 from hadoop: stopped, connections 0
[org.apache.hadoop.ipc.RpcClient]IPC Client (791957027) connection to hadoop1/192.168.4.21:60000 from hadoop: stopped, connections 0
[org.apache.hadoop.ipc.Client]Stopping client
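To make sense of this output: the client first locates the `meta-region-server` znode through ZooKeeper, then talks to the RegionServer (hadoop1:60020) for meta scans and to the Master (hadoop1:60000) to disable and delete the existing `user1` table and recreate it, and finally tears down its ZooKeeper sessions and RPC connections. The "Unable to read additional data from server ... likely server has closed socket" message during session close is benign; it only means the server closed the socket first. Below is a minimal sketch of table-creation code consistent with this log. It is an illustration only, not necessarily the article's actual TestHbase.java: it assumes the HBase 0.98 HBaseAdmin API, and the column family name `info` is a placeholder.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateTableSketch {
    public static void main(String[] args) throws Exception {
        // HBaseConfiguration.create() picks up hbase-site.xml from the classpath
        // (src/main/resources in this project layout).
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            TableName tableName = TableName.valueOf("user1");
            // If the table already exists, disable then delete it first --
            // this matches the "Disabled user1" / "Deleted user1" lines in the log.
            if (admin.tableExists(tableName)) {
                admin.disableTable(tableName);
                admin.deleteTable(tableName);
            }
            HTableDescriptor desc = new HTableDescriptor(tableName);
            desc.addFamily(new HColumnDescriptor("info")); // column family name is an assumption
            admin.createTable(desc);
            System.out.println("Create table success!");
        } finally {
            admin.close();
        }
    }
}
```

Run as a JUnit test or a plain `main` from Eclipse; as long as the `hosts` entries and `hbase-site.xml` described above are in place, the client resolves the cluster through ZooKeeper and no further connection settings are needed in code.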