Setting Up Hive in Single-User Mode

Single-user mode connects to the metastore database over the network and is the most commonly used mode.

Prerequisites for using Hive:

(1) The Hadoop cluster is running.

(2) The MySQL service is running.
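As a sketch, the prerequisite services can be started like this (assuming the Hadoop 2.x sbin scripts are on the PATH and MySQL runs as the `mysqld` service on hadoop01; adjust to your init system):

```shell
# Start HDFS and YARN (run on the appropriate master nodes)
start-dfs.sh
start-yarn.sh

# Start MySQL on hadoop01 (SysV-style init; on systemd use: systemctl start mysqld)
service mysqld start
```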

Node plan:

hadoop01: runs MySQL, which stores Hive's metadata

hadoop02: runs Hive in single-user mode

Setup steps:

1. Upload the tar package, then extract it:

tar -zxvf apache-hive-1.2.1-bin.tar.gz

Rename the extracted directory and move it to /opt/software:

mv apache-hive-1.2.1-bin hive-1.2.1

mv hive-1.2.1/ /opt/software/

 

2. Configure the environment variables

Remember to reload the profile afterwards:

. /etc/profile
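The export lines themselves are not shown above; a typical addition to /etc/profile for this layout (paths assumed from the steps in this guide) would be:

```shell
# Append to /etc/profile, then reload it with: . /etc/profile
export HIVE_HOME=/opt/software/hive-1.2.1
export PATH=$PATH:$HIVE_HOME/bin
```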

3. Edit the configuration file hive-site.xml

cp hive-default.xml.template hive-site.xml

Clear out the template's contents first: in vi you can delete everything from the cursor's line down, except the last line, and then fill in the following configuration:

<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive_remote/warehouse</value>
  </property>
  <property>
    <name>hive.metastore.local</name>
    <value>true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop01/hive_remote?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123</value>
  </property>
</configuration>

 

4. Install the MySQL driver package

Move the JDBC driver jar into Hive's lib directory:

mv mysql-connector-java-5.1.32-bin.jar /opt/software/hive-1.2.1/lib/

5. Copy Hive's jline jar into Hadoop's yarn lib directory (/opt/software/hadoop-2.6.5/share/hadoop/yarn/lib/) and delete the older jline version there, otherwise you will get the following error:

Logging initialized using configuration in jar:file:/opt/software/hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
        at jline.TerminalFactory.create(TerminalFactory.java:101)
        at jline.TerminalFactory.get(TerminalFactory.java:158)
        at jline.console.ConsoleReader.<init>(ConsoleReader.java:229)
        at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
        at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
        at org.apache.hadoop.hive.cli.CliDriver.setupConsoleReader(CliDriver.java:787)
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:721)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
        at jline.console.ConsoleReader.<init>(ConsoleReader.java:230)
        at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
        at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
        at org.apache.hadoop.hive.cli.CliDriver.setupConsoleReader(CliDriver.java:787)
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:721)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

 

[root@hadoop02 lib]# cp /opt/software/hive-1.2.1/lib/jline-2.12.jar ./
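The copy above is half of the fix; Hadoop's own low-version jline jar also has to be removed. A sketch of the full step (in Hadoop 2.6.5 the old jar is typically jline-0.9.94.jar, but verify the name in your lib directory):

```shell
cd /opt/software/hadoop-2.6.5/share/hadoop/yarn/lib/
# Remove Hadoop's old jline (name may differ; check with: ls jline-*.jar)
rm -f jline-0.9.94.jar
# Copy in the newer jline bundled with Hive 1.2.1
cp /opt/software/hive-1.2.1/lib/jline-2.12.jar ./
```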

Note: on Hive 2.x, if you started Hive before switching to MySQL, use the following command to initialize the metadata schema in MySQL, otherwise Hive will report an error:

schematool -dbType mysql -initSchema

6. Verify that the configuration works:

hive> create table test01(id int,age int);

OK

Time taken: 2.148 seconds

hive> desc test01;

OK

id                   int                                      

age                  int                                      

Time taken: 0.486 seconds, Fetched: 2 row(s)

hive> insert into test01 values(1,23);

Query ID = root_20180429203346_b7a310a3-8ca0-4e2b-af9d-f55486352a98

Total jobs = 3

Launching Job 1 out of 3

Number of reduce tasks is set to 0 since there's no reduce operator

Starting Job = job_1525003393708_0001, Tracking URL = http://hadoop03:8088/proxy/application_1525003393708_0001/

Kill Command = /opt/software/hadoop-2.6.5/bin/hadoop job  -kill job_1525003393708_0001

Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0

2018-04-29 20:34:19,531 Stage-1 map = 0%,  reduce = 0%

2018-04-29 20:35:01,432 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.9 sec

MapReduce Total cumulative CPU time: 1 seconds 900 msec

Ended Job = job_1525003393708_0001

Stage-4 is selected by condition resolver.

Stage-3 is filtered out by condition resolver.

Stage-5 is filtered out by condition resolver.

Moving data to: hdfs://mycluster/user/hive_remote/warehouse/test01/.hive-staging_hive_2018-04-29_20-33-46_610_4511768652560755389-1/-ext-10000

Loading data to table default.test01

Table default.test01 stats: [numFiles=1, numRows=1, totalSize=5, rawDataSize=4]

MapReduce Jobs Launched:

Stage-Stage-1: Map: 1   Cumulative CPU: 1.9 sec   HDFS Read: 3548 HDFS Write: 75 SUCCESS

Total MapReduce CPU Time Spent: 1 seconds 900 msec

OK

Time taken: 78.059 seconds

hive>

 

You can check the table's storage path in HDFS and view the data file with a command.
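A sketch of the HDFS commands (the warehouse path comes from hive.metastore.warehouse.dir configured above, and the table directory from the job output):

```shell
# List the table's directory in the warehouse
hdfs dfs -ls /user/hive_remote/warehouse/test01
# Print the data file; the inserted row (1, 23) is stored as text
hdfs dfs -cat /user/hive_remote/warehouse/test01/*
```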

 

You can also log in to MySQL to inspect the metadata, for example the table name and its columns.
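A sketch of querying the metastore directly (TBLS and COLUMNS_V2 are standard Hive metastore schema tables; credentials taken from hive-site.xml above):

```shell
# Query the Hive metastore database on hadoop01
mysql -uroot -p123 hive_remote <<'SQL'
-- Table names registered in the metastore
SELECT TBL_ID, TBL_NAME, TBL_TYPE FROM TBLS;
-- Column names and types
SELECT COLUMN_NAME, TYPE_NAME FROM COLUMNS_V2;
SQL
```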

 

 

 

 
