Table of Contents
- Hive Setup and Installation
- MySQL Installation
- 1. Download the MySQL 5.7 installation bundle
- 2. Upload it to /opt/software on the Linux host
- 3. Check whether MySQL is already installed
- 4. Uninstall mariadb
- 5. Extract the bundle to /opt/module
- 6. Install the rpm packages
- 7. Switch to /etc
- 8. Inspect my.cnf
- 9. Switch to /var/lib/mysql and delete all files
- 10. Initialize MySQL
- 11. Look up the generated random password
- 12. Start the MySQL service
- 13. Allow the root user to connect from any IP
- Hive Installation
Hive Setup and Installation
MySQL Installation
1. Download the MySQL 5.7 installation bundle from:
https://dev.mysql.com/downloads/mysql/5.7.html#downloads
2. Upload it to /opt/software on the Linux host.
3. Check whether MySQL is already installed:
rpm -qa | grep mariadb
mariadb-libs-5.5.56-2.el7.x86_64   // if this shows up, remove it with the next command
4. Uninstall mariadb: rpm -e --nodeps mariadb-libs
5. Extract the bundle to /opt/module.
Syntax: tar -xf <archive> -C <destination>
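The `-C` flag controls where the extracted files land. A self-contained sketch of the same syntax using a throwaway archive (the real archive is whatever MySQL bundle you uploaded to /opt/software; this demo does not touch it):

```shell
# Demo of: tar -xf <archive> -C <destination>
mkdir -p /tmp/tar-demo/src /tmp/tar-demo/dst
echo "hello" > /tmp/tar-demo/src/file.txt
tar -cf /tmp/tar-demo/demo.tar -C /tmp/tar-demo/src file.txt   # build a tiny archive
tar -xf /tmp/tar-demo/demo.tar -C /tmp/tar-demo/dst            # extract into the -C target
cat /tmp/tar-demo/dst/file.txt
```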
6. Install the rpm packages.
If the install fails with a missing-dependency error:
1. Install the required library: yum install -y libaio
2. Run the commands in this order:
sudo rpm -ivh --nodeps mysql-community-common-5.7.36-1.el7.x86_64.rpm
sudo rpm -ivh --nodeps mysql-community-libs-5.7.36-1.el7.x86_64.rpm
sudo rpm -ivh --nodeps mysql-community-libs-compat-5.7.36-1.el7.x86_64.rpm
sudo rpm -ivh --nodeps mysql-community-client-5.7.36-1.el7.x86_64.rpm
sudo rpm -ivh --nodeps mysql-community-server-5.7.36-1.el7.x86_64.rpm
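The five commands above can also be generated by a loop that preserves the required order (common → libs → libs-compat → client → server). This sketch only echoes each command so you can review it before running; drop the `echo` to actually install. The file names assume the same 5.7.36 el7 bundle as above:

```shell
# Install order matters: server depends on client/libs, libs on common.
for pkg in common libs libs-compat client server; do
  echo sudo rpm -ivh --nodeps "mysql-community-${pkg}-5.7.36-1.el7.x86_64.rpm"
done
```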
7. Switch to /etc: cd /etc
8. Inspect the MySQL configuration: cat my.cnf
9. Switch to /var/lib/mysql and delete everything in it: cd /var/lib/mysql && rm -rf * (careful: this wipes any existing MySQL data)
10. Initialize MySQL: mysqld --initialize --user=mysql
11. Look up the generated random password: cat /var/log/mysqld.log
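The password sits at the end of an "A temporary password is generated" line in that log. A sketch of pulling out just the password, demonstrated on a fabricated sample line (against the real log you would run `grep 'temporary password' /var/log/mysqld.log | awk '{print $NF}'`):

```shell
# Fabricated sample line in the format mysqld --initialize writes;
# the password on your system will differ.
line='2021-11-20T03:12:00.000000Z 1 [Note] A temporary password is generated for root@localhost: Ab1!cdEfgH2#'
temp_pw=$(echo "$line" | grep 'temporary password' | awk '{print $NF}')  # last field = the password
echo "$temp_pw"
```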
12. Start the MySQL service:
systemctl start mysqld
Log in to MySQL:
mysql -uroot -p
Enter password: (enter the temporary password from the log)
Once logged in, you must change the root password before doing anything else; other statements will fail until you do:
mysql> set password = password("new password");
(In MySQL 5.7 the non-deprecated equivalent is: mysql> alter user 'root'@'localhost' identified by 'new password';)
13. Modify the root user in the mysql.user table to allow connections from any IP:
mysql> update mysql.user set host='%' where user='root';
mysql> flush privileges;
Hive Installation
1. Download the installation package apache-hive-3.1.2-bin.tar.gz
and upload it to /opt/software/ on the Linux host.
2. Extract the software:
cd /opt/software/
tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /opt/module/
3. Update the system environment variables:
vim /etc/profile
Add the following:
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export PATH=$PATH:$HIVE_HOME/sbin:$HIVE_HOME/bin
Reload the configuration:
source /etc/profile
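A quick way to confirm the new variables took effect in the current shell (the paths are this tutorial's assumed install locations; adjust them to yours):

```shell
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export PATH=$PATH:$HIVE_HOME/sbin:$HIVE_HOME/bin
# Check that $HIVE_HOME/bin is now on PATH
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "hive bin on PATH" ;;
  *)                    echo "PATH not updated" ;;
esac
```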
4. Set Hive's own environment variables:
cd /opt/module/apache-hive-3.1.2-bin/bin/
Edit the hive-config.sh file:
vi hive-config.sh
Add the following:
export JAVA_HOME=/opt/module/jdk1.8.0_212
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export HADOOP_HOME=/opt/module/hadoop-3.2.0
export HIVE_CONF_DIR=/opt/module/apache-hive-3.1.2-bin/conf
5. Create the Hive configuration file from the bundled template:
cd /opt/module/apache-hive-3.1.2-bin/conf/
cp hive-default.xml.template hive-site.xml
6. Edit the Hive configuration file, locating each of the following properties and setting it as shown:
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.cj.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value> **(change this to your own MySQL password)**
<description>password to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://192.168.1.100:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>
<description>
JDBC connect string for a JDBC metastore.
To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
</description>
</property>
<property>
<name>datanucleus.schema.autoCreateAll</name>
<value>true</value>
<description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once.To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
<description>
Enforce metastore schema version consistency.
True: Verify that version information stored in is compatible with one from Hive jars. Also disable automatic
schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
proper metastore schema migration. (Default)
False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
</description>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/opt/module/apache-hive-3.1.2-bin/tmp/${user.name}</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>system:java.io.tmpdir</name>
<value>/opt/module/apache-hive-3.1.2-bin/iotmp</value>
<description/>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/opt/module/apache-hive-3.1.2-bin/tmp/${hive.session.id}_resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<name>hive.querylog.location</name>
<value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}</value>
<description>Location of Hive run time structured log file</description>
</property>
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}/operation_logs</value>
<description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
<name>hive.metastore.db.type</name>
<value>mysql</value>
<description>
Expects one of [derby, oracle, mysql, mssql, postgres].
Type of database used by the metastore. Information schema &amp; JDBCStorageHandler depend on it.
</description>
</property>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
<description>Whether to include the current database in the Hive prompt.</description>
</property>
<property>
<name>hive.cli.print.header</name>
<value>true</value>
<description>Whether to print the names of the columns in query output.</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/opt/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
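One easy-to-miss detail in the block above: inside XML, the `&` characters in the JDBC URL must be written as `&amp;`, or Hive will abort at startup with a parse error. A sketch that validates such a fragment with Python's standard-library XML parser (any XML checker, e.g. `xmllint --noout`, works equally well):

```shell
# Write a minimal fragment with the properly escaped URL, then parse it.
cat > /tmp/hive-site-check.xml <<'EOF'
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.1.100:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>
  </property>
</configuration>
EOF
python3 -c 'import xml.etree.ElementTree as ET; ET.parse("/tmp/hive-site-check.xml"); print("well-formed")'
```

Run the same parser against your real hive-site.xml after editing it to catch escaping mistakes before starting Hive.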
7. Upload the MySQL driver jar to the /opt/module/apache-hive-3.1.2-bin/lib/ folder.
The driver is distributed as mysql-connector-java-8.0.15.zip; unzip it and take the jar file from inside.
8. Make sure MySQL contains a database named hive (if not, create it: mysql> create database hive;)
9. Initialize the metastore:
schematool -dbType mysql -initSchema
10. Make sure Hadoop is running.
11. Start Hive:
hive
12. Verify that it started successfully:
show databases;