Hadoop 3.3 Installation and Configuration

1. Installing on macOS

brew install hadoop
brew install hive


2. Configuration

Reference: Hive installation and configuration

(1) Environment variables:

## java
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_321.jdk/Contents/Home
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

## go
export GOROOT=/usr/local/Cellar/go/1.18
export GOPATH=/Users/apple/workspace/GoProjects
export GOBIN=$GOPATH/bin
export GO111MODULE=on
export GOPROXY=https://goproxy.cn,https://goproxy.io,direct
export GOSUMDB=sum.golang.google.cn

## hadoop
export HADOOP_HOME=/usr/local/Cellar/hadoop/3.3.2/libexec
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CLASSPATH=`hadoop classpath`
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native/"

## hive
export HIVE_HOME=/usr/local/Cellar/hive/3.1.3/libexec
export HIVE_CONF_DIR=$HIVE_HOME/conf
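After adding these exports to the shell profile (assumed here to be ~/.zshrc; adjust for your shell), a quick sanity check confirms each tool resolves through the new paths:

```shell
# Reload the profile so the new exports take effect
source ~/.zshrc

# Each command should print a version banner if the paths above are correct
java -version
hadoop version
hive --version
```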

(2) hadoop-env.sh

#############################################################
export JAVA_HOME=/usr/local/Cellar/openjdk@8/1.8.0+322
export HADOOP_CONF_DIR=/usr/local/Cellar/hadoop/3.3.2/libexec/etc/hadoop

(3) core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
	<!-- Address of the HDFS NameNode -->
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://192.168.31.120:9000</value>
	</property>

	<!-- Storage directory for files Hadoop generates at runtime -->
	<property>
		<name>hadoop.tmp.dir</name>
		<value>/usr/local/Cellar/hadoop/3.3.2/data/tmp</value>
	</property>

</configuration>

(4) hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
	<!-- Number of HDFS replicas -->
	<property>
		<name>dfs.replication</name>
		<value>1</value>
	</property>

	<!-- Disable HDFS permission checking -->
	<property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
	</property>
</configuration>
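The Hive steps below assume HDFS and YARN are already running. On a fresh install, the NameNode must be formatted once and the daemons started; a minimal sketch, using the $HADOOP_HOME exported above (and assuming passwordless ssh to localhost is set up, which the start scripts require):

```shell
# Format the NameNode -- first run only; reformatting destroys HDFS metadata
hdfs namenode -format

# Start the HDFS and YARN daemons
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh

# jps should now list NameNode, DataNode, SecondaryNameNode,
# ResourceManager and NodeManager
jps
```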

3. Hive Configuration

(1) hive-env.sh

#hadoop
HADOOP_HOME=/usr/local/Cellar/hadoop/3.3.2/libexec
#hive conf
export HIVE_CONF_DIR=/usr/local/Cellar/hive/3.1.3/libexec/conf
#hive lib
export HIVE_AUX_JARS_PATH=/usr/local/Cellar/hive/3.1.3/libexec/lib

(2) hive-site.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<!-- JDBC connection URL -->
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true&amp;useUnicode=true&amp;characterEncoding=UTF-8&amp;useSSL=false&amp;serverTimezone=GMT</value>
	</property>

    <!-- JDBC driver class -->
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.cj.jdbc.Driver</value>
	</property>

    <!-- JDBC username -->
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>

    <!-- JDBC password -->
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>123456aA</value>
    </property>

    <property>
        <name>system:java.io.tmpdir</name>
        <value>/usr/local/Cellar/hive/3.1.3/data/tmp</value>
    </property>
    
   <property>
        <name>hive.cli.print.header</name>
        <value>true</value>
    </property>

    <property>
        <name>hive.cli.print.current.db</name>
        <value>true</value>
    </property>
</configuration>

(3) Copy the MySQL JDBC driver into Hive's lib directory:
wget https://cdn.mysql.com//Downloads/Connector-J/mysql-connector-java-8.0.27.tar.gz
tar -xzf mysql-connector-java-8.0.27.tar.gz
cp mysql-connector-java-8.0.27/mysql-connector-java-8.0.27.jar $HIVE_HOME/lib/

(4) Create the Hive directories in HDFS

## Create the directories
hdfs dfs -mkdir -p /user/hive/warehouse
hadoop fs -mkdir -p /user/hive/tmp
hadoop fs -mkdir -p /user/hive/log

## Open up the directory permissions
hadoop fs -chmod -R 777 /user/hive/warehouse
hadoop fs -chmod -R 777 /user/hive/tmp
hadoop fs -chmod -R 777 /user/hive/log

(5) Leave HDFS safe mode
hdfs dfsadmin -safemode leave

(6) Initialize the metastore schema
schematool -dbType mysql -initSchema

(7) Start the metastore service in the background
nohup hive --service metastore &

(8) Start HiveServer2
nohup $HIVE_HOME/bin/hiveserver2 &

(9) Connect with Beeline

beeline> !connect jdbc:hive2://hadoop1:10000
# The default port is 10000
Connecting to jdbc:hive2://hadoop1:10000
# Enter the username and password
Enter username for jdbc:hive2://hadoop1:10000: root
Enter password for jdbc:hive2://hadoop1:10000: ******

(10) Web UIs

hdfs (NameNode UI): http://localhost:9870/
hive (HiveServer2 UI): http://localhost:10002/
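A quick reachability check for both UIs (assumes curl is available and the daemons are up):

```shell
# -f makes curl fail on HTTP errors, so the echo only fires on success
curl -sf http://localhost:9870/ >/dev/null && echo "HDFS NameNode UI is up"
curl -sf http://localhost:10002/ >/dev/null && echo "HiveServer2 web UI is up"
```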
