First, the Hive-related files need to be configured; there are six of them in total.
In the home directory, run gedit .bashrc and adjust the environment variables:
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# User specific aliases and functions
export JAVA_HOME=/usr/lib/jvm/java-openjdk
export MAVEN_HOME=/home/hadoop/maven
export ANT_HOME=/home/hadoop/ant
export HIVE_HOME=/home/hadoop/hive
export SPARK_HOME=/home/hadoop/spark
export HADOOP_HOME=/home/hadoop/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HIVE_HOME/bin:$SPARK_HOME/bin:$MAVEN_HOME/bin:/home/hadoop/protobuf/bin:$ANT_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:/home/hadoop/hbase/bin:/home/hadoop/redis/bin:.
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
Configuration complete. Run source ~/.bashrc (or open a new shell) so the changes take effect.
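Since Hadoop, Hive, and Spark are all launched from these variables, it can be worth confirming that they are visible to child JVM processes. A quick check could look like the following sketch (the EnvCheck class is purely illustrative, not part of the original setup):

// Print the variables exported in .bashrc as seen by a child JVM.
public class EnvCheck {
    public static void main(String[] args) {
        String[] names = {"JAVA_HOME", "HADOOP_HOME", "HIVE_HOME",
                          "SPARK_HOME", "MAVEN_HOME", "ANT_HOME"};
        for (String name : names) {
            System.out.println(name + " = " + System.getenv(name));
        }
    }
}

Compile it with javac EnvCheck.java and run java EnvCheck; any null in the output means that variable was not picked up, so re-run source ~/.bashrc or log in again.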
Next, start the Hadoop daemons:
[hadoop@master ~]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-master.out
master: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-master.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-resourcemanager-master.out
master: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-master.out
[hadoop@master ~]$ jps
4919 SecondaryNameNode
5069 ResourceManager
5170 NodeManager
4637 NameNode
5395 Jps
4737 DataNode
With all five daemons showing in jps, start the Hive service (HiveServer1, which listens on port 10000 by default) so that clients can connect over JDBC:
[hadoop@master ~]$ hive --service hiveserver &
[1] 5494
[hadoop@master ~]$ Starting Hive Thrift Server
The interactive CLI still works while the Thrift server runs in the background:
hive
Logging initialized using configuration in file:/home/hadoop/hive/conf/hive-log4j.properties
hive> exit;
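The OK / Copying / Loading messages that follow were printed by the backgrounded server while a JDBC client exercised it. A minimal sketch of such a client, modeled on the stock HiveServer1 JDBC example (the class name and classpath details are illustrative; the host master and the file path mirror this cluster, but treat the whole class as a sketch, not the exact program that produced this output):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveJdbcClient {
    // HiveServer1 driver; HiveServer2 would use org.apache.hive.jdbc.HiveDriver
    // and a jdbc:hive2:// URL instead.
    private static final String DRIVER = "org.apache.hadoop.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException, ClassNotFoundException {
        Class.forName(DRIVER);
        // Connect to the Thrift server started above (default port 10000).
        Connection con = DriverManager.getConnection(
                "jdbc:hive://master:10000/default", "", "");
        Statement stmt = con.createStatement();
        String tableName = "testhivedrivertable";

        // These two statements account for the first two "OK" lines below.
        stmt.execute("drop table if exists " + tableName);
        stmt.execute("create table " + tableName + " (key int, value string) "
                + "row format delimited fields terminated by '\t'");

        // Produces the "Copying data ...", "Loading data to table ..." and
        // table-stats messages, followed by another "OK".
        String filepath = "/home/hadoop/hadoop/input/hive_test.log";
        stmt.execute("load data local inpath '" + filepath + "' into table " + tableName);

        // An aggregation compiles to the single MapReduce job reported below
        // as "Starting Job = job_..." with its tracking URL.
        ResultSet res = stmt.executeQuery("select count(1) from " + tableName);
        while (res.next()) {
            System.out.println(res.getString(1));
        }
        con.close();
    }
}

Compile it with javac and run it with the jars under $HIVE_HOME/lib plus the Hadoop jars on the classpath. While it runs, the server prints output like the following: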
[hadoop@master ~]$ OK
OK
Copying data from file:/home/hadoop/hadoop/input/hive_test.log
Copying file: file:/home/hadoop/hadoop/input/hive_test.log
Loading data to table default.testhivedrivertable
Table default.testhivedrivertable stats: [numFiles=1, numRows=0, totalSize=811, rawDataSize=0]
OK
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1437530693126_0001, Tracking URL = http://master:8088/proxy/application_1437530693126_0001/
Kill Command = /home/hadoop