Installing and Configuring hive-2.1.0 on Ubuntu

This post covers installing and configuring hive-2.1.0 on Ubuntu: installing mysql-server and mysql-client, starting the MySQL service, creating the metastore database and user, and configuring Hive's environment and connection settings.

1. Installing mysql-server and mysql-client

root@SparkSingleNode:/usr/local# sudo apt-get install mysql-server mysql-client    (Ubuntu)


Here, my root password is rootroot.

2. Starting the MySQL Service


root@SparkSingleNode:/usr/local# sudo /etc/init.d/mysql start    (Ubuntu)

* Starting MySQL database server mysqld [ OK ]

root@SparkSingleNode:/usr/local#

3. Logging in to MySQL

One nice thing about MySQL on Ubuntu is that the root accounts (root@localhost, root@<hostname>, and so on) are already set up with sensible defaults.


root@SparkSingleNode:/usr/local# mysql -uroot -p

Enter password:    // enter rootroot here

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 43

Server version: 5.5.53-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create database if not exists hive_metadata;

Query OK, 1 row affected (0.00 sec)

mysql> grant all privileges on hive_metadata.* to 'hive'@'%' identified by 'hive';

Query OK, 0 rows affected (0.00 sec)

mysql> grant all privileges on hive_metadata.* to 'hive'@'localhost' identified by 'hive';

Query OK, 0 rows affected (0.00 sec)

mysql> grant all privileges on hive_metadata.* to 'hive'@'SparkSingleNode' identified by 'hive';    // note: SparkSingleNode is my hostname; don't copy it blindly

Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;

Query OK, 0 rows affected (0.00 sec)

mysql> use hive_metadata;

Database changed

mysql> select user,host,password from mysql.user;

+------------------+-----------------+-------------------------------------------+
| user             | host            | password                                  |
+------------------+-----------------+-------------------------------------------+
| root             | localhost       | *6C362347EBEAA7DF44F6D34884615A35095E80EB |
| root             | sparksinglenode | *6C362347EBEAA7DF44F6D34884615A35095E80EB |
| root             | 127.0.0.1       | *6C362347EBEAA7DF44F6D34884615A35095E80EB |
| root             | ::1             | *6C362347EBEAA7DF44F6D34884615A35095E80EB |
| debian-sys-maint | localhost       | *5DD77395EB71A702D01A6B0FADD8F2C0C88830C5 |
| hive             | %               | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| hive             | localhost       | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| hive             | sparksinglenode | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
+------------------+-----------------+-------------------------------------------+

8 rows in set (0.00 sec)

mysql> exit;

Bye

root@SparkSingleNode:/usr/local#
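Before moving on, it's worth confirming that the new account can actually reach the metastore database. A quick check from the shell, assuming the grants above:

# log in as the hive user and run a trivial query against hive_metadata
mysql -uhive -phive hive_metadata -e "SELECT 1;"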

4. Installing Hive

This step is simple, so I won't belabor it; a sketch of the download and extraction follows below.
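A minimal sketch of fetching and unpacking the release, assuming the Apache archive URL and the /usr/local/hive target directory used in this post:

cd /usr/local/hive
# fetch the 2.1.0 binary release (URL assumed; any Apache mirror works)
wget https://archive.apache.org/dist/hive/hive-2.1.0/apache-hive-2.1.0-bin.tar.gz
tar -zxvf apache-hive-2.1.0-bin.tar.gz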


spark@SparkSingleNode:/usr/local/hive$ ll

total 12

drwxr-xr-x  3 spark spark 4096 Nov 21 10:39 ./

drwxr-xr-x 15 root  root  4096 Nov 21 10:25 ../

drwxrwxr-x  9 spark spark 4096 Nov 21 10:38 apache-hive-2.1.0-bin/

spark@SparkSingleNode:/usr/local/hive$ mv apache-hive-2.1.0-bin hive-2.1.0

spark@SparkSingleNode:/usr/local/hive$ ls

hive-2.1.0

spark@SparkSingleNode:/usr/local/hive$ cd hive-2.1.0/

spark@SparkSingleNode:/usr/local/hive/hive-2.1.0$ ls

bin examples jdbc LICENSE README.txt scripts

conf hcatalog lib NOTICE RELEASE_NOTES.txt

spark@SparkSingleNode:/usr/local/hive/hive-2.1.0$ cd conf/

spark@SparkSingleNode:/usr/local/hive/hive-2.1.0/conf$ ls

beeline-log4j2.properties.template ivysettings.xml

hive-default.xml.template llap-cli-log4j2.properties.template

hive-env.sh.template llap-daemon-log4j2.properties.template

hive-exec-log4j2.properties.template parquet-logging.properties

hive-log4j2.properties.template

spark@SparkSingleNode:/usr/local/hive/hive-2.1.0/conf$ cp hive-default.xml.template hive-site.xml

spark@SparkSingleNode:/usr/local/hive/hive-2.1.0/conf$

5. Configuring Hive

In conf/hive-site.xml, point the metastore at MySQL by editing these four properties:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://SparkSingleNode:3306/hive_metadata?createDatabaseIfNotExist=true</value>
  <description>
    JDBC connect string for a JDBC metastore.
    To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
    For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
  </description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>Username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>

spark@SparkSingleNode:/usr/local/hive/hive-2.1.0/conf$ cp hive-env.sh.template hive-env.sh

spark@SparkSingleNode:/usr/local/hive/hive-2.1.0/conf$ vim hive-env.sh

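hive-env.sh usually only needs the Hadoop location (and optionally the Hive conf directory). A minimal sketch, assuming the same install paths used throughout this post:

# in conf/hive-env.sh (paths assumed from this post's layout)
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0
export HIVE_CONF_DIR=/usr/local/hive/hive-2.1.0/conf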

spark@SparkSingleNode:/usr/local/hive/hive-2.1.0/bin$ vim hive-config.sh


export JAVA_HOME=/usr/local/jdk/jdk1.8.0_60

export HIVE_HOME=/usr/local/hive/hive-2.1.0

export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0

vim /etc/profile

#hive

export HIVE_HOME=/usr/local/hive/hive-2.1.0

export PATH=$PATH:$HIVE_HOME/bin

source /etc/profile
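To confirm the PATH change took effect, a quick sanity check, assuming the paths above:

which hive        # should print /usr/local/hive/hive-2.1.0/bin/hive
hive --version    # should report Hive 2.1.0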

Copy mysql-connector-java-***.jar (the MySQL JDBC driver) into the lib directory under the Hive installation.
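A minimal sketch of that copy, assuming the jar sits in the current directory (the version number below is hypothetical; use whatever mysql-connector-java jar matches your MySQL server):

# version number is hypothetical; substitute your actual jar
cp mysql-connector-java-5.1.40-bin.jar /usr/local/hive/hive-2.1.0/lib/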


spark@SparkSingleNode:/usr/local/hadoop/hadoop-2.6.0$ sbin/start-all.sh
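With HDFS and YARN up, one more step is usually needed on Hive 2.x before the first launch: initializing the metastore schema. A minimal sketch, assuming the MySQL settings configured above:

# one-time schema initialization for the MySQL metastore (Hive 2.x no longer auto-creates it)
schematool -dbType mysql -initSchema
# then start the Hive CLI
hive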
