1.MySQL
Reference: https://blog.csdn.net/dukangming/article/details/104310752
1. Download and install
rpm -ivh mysql57-community-release-el7-10.noarch.rpm
Be sure to enter the yum.repos.d directory first:
cd /etc/yum.repos.d/
yum install mysql-server
2. Change the password
systemctl start mysqld
grep 'temporary password' /var/log/mysqld.log
mysql -u root -p
Note: on MySQL 5.7 the server may reject any other statement until the temporary password has been reset (error "You must reset your password using ALTER USER statement before executing this statement."); in that case run ALTER USER with a policy-compliant strong password first, then relax the policy and set the final password.
set global validate_password_policy=LOW;
set global validate_password_length=6;
ALTER USER 'root'@'localhost' IDENTIFIED BY '123456';
SHOW VARIABLES LIKE 'validate_password%';
3. Allow access from other hosts
use mysql;
select host from user where user='root';
update user set host = '%' where user ='root';
flush privileges; (takes effect immediately)
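To verify remote access, try logging in from another node (hadoop1 is this cluster's MySQL host; substitute your own):
mysql -h hadoop1 -u root -p
If the connection is refused, also check that port 3306 is open in the firewall.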
2. Hive
1. Unpack
tar -zxvf apache-hive-1.2.1-bin.tar.gz -C /opt/hive/
2. The hive-env.sh file
cp hive-env.sh.template hive-env.sh
vi hive-env.sh
export HADOOP_HOME=/opt/hadoop/hadoop-2.6.5 (your HADOOP_HOME path)
export HIVE_CONF_DIR=/opt/hive/apache-hive-1.2.1-bin/conf (the conf directory under your Hive install)
3. Hadoop cluster configuration
(1) HDFS and YARN must be started
[atguigu@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh
[atguigu@hadoop103 hadoop-2.7.2]$ sbin/start-yarn.sh
(2) Create the /tmp and /user/hive/warehouse directories on HDFS and make them group-writable
[atguigu@hadoop102 hadoop-2.7.2]$ bin/hadoop fs -mkdir /tmp
[atguigu@hadoop102 hadoop-2.7.2]$ bin/hadoop fs -mkdir -p /user/hive/warehouse
[atguigu@hadoop102 hadoop-2.7.2]$ bin/hadoop fs -chmod g+w /tmp
[atguigu@hadoop102 hadoop-2.7.2]$ bin/hadoop fs -chmod g+w /user/hive/warehouse
4. Basic Hive operations
(1) Start Hive
[atguigu@hadoop102 hive]$ bin/hive
(2) List databases
hive> show databases;
hive> create database hive_db;  -- create the hive_db database
(3) Switch to the default database
hive> use default;
(4) Show the tables in the default database
hive> show tables;
(5) Create a table
hive> create table student(id int, name string);
(6) Show how many tables the database now has
hive> show tables;
(7) Inspect the table structure
hive> desc student;
(8) Insert data into the table
hive> insert into student values(1000,"ss");
(9) Query the table
hive> select * from student;
(10) Quit Hive
hive> quit;
Note (how Hive objects appear in HDFS):
Database: a folder under the ${hive.metastore.warehouse.dir} directory in HDFS.
Table: a folder under its database's directory; that folder holds the table's actual data files.
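To see this layout yourself, list the warehouse directory (assumes the default path and the student table created above):
[atguigu@hadoop102 hadoop-2.7.2]$ bin/hadoop fs -ls /user/hive/warehouse
The student table shows up as the directory /user/hive/warehouse/student, with its data files inside.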
5. Hive startup error: Found class jline.Terminal, but interface was expected
Environment:
hive1.2.1
hadoop2.6.5
Cause:
An older jline jar ships with Hadoop:
/hadoop-2.6.5/share/hadoop/yarn/lib:
-rw-r--r-- 1 root root 87325 Mar 10 18:10 jline-0.9.94.jar
Fix: replace it with Hive's jline 2.12:
cd /opt/hive/apache-hive-1.2.1-bin/lib/
cp jline-2.12.jar /opt/hadoop/hadoop-2.6.5/share/hadoop/yarn/lib/
cd /opt/hadoop/hadoop-2.6.5/share/hadoop/yarn/lib/
rm -rf jline-0.9.94.jar
6. Store the Hive metastore in MySQL
(1) Copy the JDBC driver
cp mysql-connector-java-5.1.27.jar /opt/hive/apache-hive-1.2.1-bin/lib/
(2) Point the metastore at MySQL
A. Create a hive-site.xml in Hive's conf directory (/opt/module/hive/conf in the original tutorial; /opt/hive/apache-hive-1.2.1-bin/conf in this setup)
[atguigu@hadoop102 conf]$ touch hive-site.xml
[atguigu@hadoop102 conf]$ vi hive-site.xml
B. Set the parameters per the official docs and copy them into hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://hadoop1:3306/metastore?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>hive.cli.print.header</name>
<value>true</value>
</property>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
</property>
<!-- hiveserver2 configuration -->
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>192.168.58.111</value>
</property>
</configuration>
C. If Hive fails to start after configuring, try rebooting the virtual machine. (After rebooting, remember to start the Hadoop cluster first.)
Before Hive is started, MySQL shows only the system databases:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.01 sec)
After Hive starts for the first time, the metastore database has been created automatically:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| metastore |
| mysql |
| performance_schema |
| sys |
+--------------------+
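You can also peek inside the metastore database itself; Hive records databases in the DBS table and tables in the TBLS table (table names are from the Hive 1.2 metastore schema; a sanity check, not a required step):
mysql> use metastore;
mysql> select DB_ID, NAME, DB_LOCATION_URI from DBS;
mysql> select TBL_ID, TBL_NAME, TBL_TYPE from TBLS;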
7. Other configuration
2.9 Common Hive property configuration
2.9.1 Warehouse location
1) The default warehouse location is /user/hive/warehouse on HDFS.
2) Hive does not create a folder for the default database itself; tables in default get their folders directly under the warehouse directory.
3) To change the default warehouse location, copy the following from hive-default.xml.template into hive-site.xml:
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
Grant the group write permission on the directory:
bin/hdfs dfs -chmod g+w /user/hive/warehouse
2.9.2 Display settings for queries
1) Add the following to hive-site.xml to show the current database in the prompt and column headers in query results (see the illustration after the snippet).
<property>
<name>hive.cli.print.header</name>
<value>true</value>
</property>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
</property>
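With both options on, the prompt shows the current database and SELECT output carries column headers (illustration using the student table from earlier; exact formatting may vary by version):
hive (default)> select * from student;
OK
student.id	student.name
1000	ss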
2.9.3 Hive log configuration
1. Hive logs go to /tmp/atguigu/hive.log by default (the directory is named after the current user).
2. To move the logs to /opt/module/hive/logs:
(1) Rename /opt/module/hive/conf/hive-log4j.properties.template to
hive-log4j.properties
[atguigu@hadoop102 conf]$ pwd
/opt/module/hive/conf
[atguigu@hadoop102 conf]$ mv hive-log4j.properties.template hive-log4j.properties
(2) Set the log location in hive-log4j.properties
hive.log.dir=/opt/module/hive/logs
2.9.4 Ways to set parameters
1. View all current configuration settings
hive> set;
2. Three ways to set a parameter
(1) Configuration files
Default configuration file: hive-default.xml
User-defined configuration file: hive-site.xml
Note: user-defined settings override the defaults. Hive also reads Hadoop's configuration, because Hive starts as a Hadoop client, and Hive's settings override Hadoop's. Settings in these files apply to every Hive process started on the machine.
(2) Command-line parameters
When starting Hive, add -hiveconf param=value on the command line to set a parameter.
For example:
[atguigu@hadoop103 hive]$ bin/hive -hiveconf mapred.reduce.tasks=10
Note: effective only for this Hive session.
Check the setting:
hive (default)> set mapred.reduce.tasks;
(3) SET statements
Parameters can also be set with the SET keyword inside HQL.
For example:
hive (default)> set mapred.reduce.tasks=100;
Note: effective only for this Hive session.
Check the setting:
hive (default)> set mapred.reduce.tasks;
The three methods take precedence in increasing order: configuration file < command-line parameter < SET statement. Note that some system-level parameters, such as the log4j settings, must be set with the first two methods, because they are read before the session is established.
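A quick way to see the session-level precedence, combining the two methods above (same parameter as in the examples; a demonstration, not a required step):
[atguigu@hadoop103 hive]$ bin/hive -hiveconf mapred.reduce.tasks=10
hive (default)> set mapred.reduce.tasks;   -- shows 10, from the command line
hive (default)> set mapred.reduce.tasks=100;
hive (default)> set mapred.reduce.tasks;   -- now shows 100: the SET statement wins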
######################################### 2.6 Hive JDBC access #########################################
2.6.1 Start the hiveserver2 service
[atguigu@hadoop102 hive]$ bin/hiveserver2
2.6.2 Start beeline
[atguigu@hadoop102 hive]$ bin/beeline
Beeline version 1.2.1 by Apache Hive
beeline>
2.6.3 Connect to hiveserver2
beeline> !connect jdbc:hive2://hadoop102:10000 (press Enter)
Connecting to jdbc:hive2://hadoop102:10000
Enter username for jdbc:hive2://hadoop102:10000: atguigu (press Enter)
Enter password for jdbc:hive2://hadoop102:10000: (just press Enter)
Connected to: Apache Hive (version 1.2.1)
Driver: Hive JDBC (version 1.2.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hadoop102:10000> show databases;
+----------------+--+
| database_name |
+----------------+--+
| default |
| hive_db2 |
+----------------+--+
3. Spark
1. Install Scala
tar -xvf scala-2.11.8.tgz -C /opt/scala/
Environment variables:
export SCALA_HOME=/opt/scala/scala-2.11.8
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SCALA_HOME/bin
source /etc/profile
scp -r /opt/scala/ root@hadoop2:/opt/
scp -r /etc/profile root@hadoop2:/etc
2. Unpack
tar -xvf spark-2.2.0-bin-hadoop2.6.tgz -C /opt/spark/
3. slaves
cp slaves.template slaves
Add the worker hostname(s) to the slaves file:
hadoop2
4.spark-env.sh
cp spark-env.sh.template spark-env.sh
export SPARK_PID_DIR=/opt/spark/pid/
export JAVA_HOME=/opt/java/jdk1.8.0_211
export HADOOP_HOME=/opt/hadoop/hadoop-2.6.5
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_MASTER_HOST=hadoop1
export SPARK_MASTER_PORT=7077
The following were not used here and are listed for reference:
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
SPARK_MASTER_HOST=standalone.com   # hostname the master binds to
SPARK_MASTER_PORT=7077             # master RPC port; workers connect to the master here
SPARK_MASTER_WEBUI_PORT=8080       # port for the master's Spark web UI
SPARK_WORKER_MEMORY=1g             # memory each worker may use
5. Environment variables
vi /etc/profile
export JAVA_HOME=/opt/java/jdk1.8.0_211
export JRE_HOME=/opt/java/jdk1.8.0_211/jre
export HADOOP_HOME=/opt/hadoop/hadoop-2.6.5
export SCALA_HOME=/opt/scala/scala-2.11.8
export SPARK_HOME=/opt/spark/spark-2.2.0-bin-hadoop2.6
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SCALA_HOME/bin:${SPARK_HOME}/bin
Remember to run source /etc/profile
6. Distribute to the other nodes
scp -r /opt/spark/ root@hadoop2:/opt/
scp -r /etc/profile root@hadoop2:/etc
7. Test
./start-all.sh starts standalone mode; it is not needed for local or YARN mode.
For the YARN test, see the official docs: http://spark.apache.org/docs/latest/running-on-yarn.html#launching-spark-on-yarn
# --executor-memory: memory each executor process may allocate
# --executor-cores: CPU cores each executor process may allocate
# (note: never put a comment after a trailing \ — it breaks the line continuation)
./bin/spark-submit --class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode cluster \
--driver-memory 4g \
--executor-memory 2g \
--executor-cores 1 \
--queue default \
examples/jars/spark-examples*.jar \
10
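In cluster deploy mode the Pi result is printed by the driver running on YARN, not in the local console; you can pull it from the application logs once the job finishes (the application id comes from the spark-submit output, shown as a placeholder here):
yarn logs -applicationId <application_id> | grep "Pi is roughly"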
4. Integrate Spark with Hive (from here on, keep hive-site.xml in sync between Hive and Spark; if you don't need standalone mode, the copy on hd2 doesn't need further changes)
1. Other tutorials all say Spark must be rebuilt with Hive support; not sure why, but it worked here without rebuilding (the prebuilt spark-2.2.0-bin-hadoop2.6 package apparently ships with Hive support compiled in).
2. The integration method depends on how the Hive parameter hive.metastore.uris is set.
The two cases differ as follows:
-1. hive.metastore.uris is empty (the default):
SparkSQL connects directly to the metastore database using Hive's javax.jdo.option.XXX settings and reads the table metadata itself.
--1.1 The database's JDBC driver must be added to the Spark application's classpath.
-2. hive.metastore.uris has a value:
SparkSQL obtains the table metadata through Hive's metastore service.
--2.1 Simply starting Hive's metastore service completes the SparkSQL/Hive integration:
$ hive --service metastore &
Once uris is set, the metastore service is mandatory; without it even Hive itself will not start.
I used option -2.
Edit hive-site.xml and append at the end:
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://hadoop1:9083</value>
</property>
The complete file:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://hadoop1:3306/metastore?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>hive.cli.print.header</name>
<value>true</value>
</property>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>192.168.58.111</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://hadoop1:9083</value>
</property>
</configuration>
3. Copy to Spark
cp hive-site.xml /opt/spark/spark-2.2.0-bin-hadoop2.6/conf
4. Test
./hive --service metastore &
./spark-sql
ps -ef | grep hive
kill -9 9267   (kill the metastore by PID when finished; 9267 was the PID found with the grep above)
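To confirm the integration works, query the Hive table from spark-sql (assumes the student table from the Hive section above still exists):
spark-sql> show databases;
spark-sql> select * from student;
The query should return the row (1000, ss) inserted earlier.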
5. SparkSQL's thriftserver service (change both the Hive and Spark copies of hive-site.xml)
hive-site.xml:
hive.server2.thrift.port=10000 (the port to listen on)
hive.server2.thrift.bind.host=bigdata.ibeifeng.com (the hostname to bind to)
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>hadoop1</value>
</property>
./hive --service metastore
./start-thriftserver.sh
jps -ml | grep HiveThriftServer2
./bin/beeline
From here on, same as hiveserver2.
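For example (mirroring 2.6.3, with this cluster's hostname; user and password depend on your setup):
beeline> !connect jdbc:hive2://hadoop1:10000
Enter username for jdbc:hive2://hadoop1:10000: root (press Enter)
Enter password for jdbc:hive2://hadoop1:10000: (just press Enter)
0: jdbc:hive2://hadoop1:10000> show databases;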