Part 1: Installing MySQL
1) yum install mysql-server (on Ubuntu: sudo apt-get install mysql-server)
2) find / -name "mysql"
3) Add default-character-set=utf8 to the MySQL configuration file /etc/my.cnf
4) service mysqld start
5) mysqladmin -u root password root
6) Log in:
mysql -u root -p
then enter the password at the prompt.
7) If you forget the root password:
service mysqld stop
mysqld_safe --user=root --skip-grant-tables
mysql -u root
use mysql
update user set password=password("new_pass") where user="root";
flush privileges;
8) Important MySQL directories on Linux
Database files:
/var/lib/mysql/
Configuration files:
/usr/share/mysql (the mysql.server script and configuration files)
Commands:
/usr/bin (mysqladmin, mysqldump, and other commands)
Startup script:
/etc/rc.d/init.d/ (directory containing the mysql startup script)
Part 2: Installing Hive
1) Unpack the tarball
2) Set the environment variables
3) Edit the configuration files:
(1) Edit hive-env.sh
export HADOOP_HOME=/home/hadoop/hadoop-2.4.0
export HIVE_CONF_DIR=/home/hadoop/hive-0.13.1/conf
(2) Edit hive-log4j.properties
vi $HIVE_HOME/conf/hive-log4j.properties
hive.log.dir=/opt/hive-0.13.1/logs
(3) Edit hive-site.xml (note that the '&' in the JDBC URL must be escaped as &amp; inside XML):
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://hadoop:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
<description></description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<!-- base hdfs path -->
<value>/home/zfh/apache/hivewarehouse</value>
<description>location of default database for the warehouse</description>
</property>
</configuration>
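Because hive-site.xml is XML, the '&' separating the JDBC URL parameters must be written as &amp;, or the file will not parse and Hive fails at startup. A minimal sketch using Python's standard xml.etree parser (the fragment below is a hypothetical excerpt of the configuration above) showing that the escaped form parses back to a literal '&':

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal fragment of hive-site.xml; note '&amp;' in the URL.
fragment = """<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8</value>
  </property>
</configuration>"""

# The escaping exists only in the file; the parsed value carries a literal '&'.
url = ET.fromstring(fragment).find("./property/value").text
print(url)
```

If the '&' is left unescaped, ET.fromstring (like Hive's own parser) raises a parse error instead of returning the URL.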
Note: the key point is the MySQL username and password used for the connection:
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
Then run mysql -u root -p; inside MySQL, run use mysql and you will see several host entries. Each of them must be granted privileges individually, otherwise MySQL refuses the connection:
+-----------+------+
| host      | user |
+-----------+------+
| %         | hive |
| %         | root |
| 127.0.0.1 | root |
| hadoop    | hive |
| hadoop    | root |
| localhost | hive |
| localhost | root |
+-----------+------+
grant all privileges on hive.* to hive@"hadoop" identified by "hive" with grant option;
grant all privileges on hive.* to hive@"localhost" identified by "hive" with grant option;
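MySQL matches accounts by the exact (user, host) pair, which is why each host row for 'hive' needs its own grant. A small sketch generating one GRANT per host; the host list and the 'hive'/'hive' credentials are taken from the table and steps above:

```python
# One GRANT per (user, host) pair; '%' alone does not cover rows such as
# 'hive'@'hadoop' once they exist in mysql.user.
hosts = ["%", "hadoop", "localhost"]
grants = [
    "GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'{}' "
    "IDENTIFIED BY 'hive' WITH GRANT OPTION;".format(h)
    for h in hosts
]
print("\n".join(grants))
```

Note that GRANT ... IDENTIFIED BY (as used above) is MySQL 5.x syntax; in MySQL 8 the account must be created first with CREATE USER and then granted separately.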
Once the grants above are done, verify with mysql -u hive -p and enter the password hive (both must match the values in hive-site.xml); if you can connect, everything is fine.
Then, before the first start, initialize the Hive metastore schema:
schematool -dbType mysql -initSchema
Integrating SparkSQL with Hive
1. Copy $HIVE_HOME/conf/hive-site.xml and hive-log4j.properties to $SPARK_HOME/conf/
2. In $SPARK_HOME/conf/, edit spark-env.sh and add:
export HIVE_HOME=/home/zfh/apache/hive-2.0.0
export SPARK_CLASSPATH=$HIVE_HOME/lib/mysql-connector-java-5.1.15-bin.jar:$SPARK_CLASSPATH
3. Optionally, also adjust Spark's log4j configuration so that extra INFO messages are not printed to the console:
log4j.rootCategory=WARN, console
That is all it takes to integrate SparkSQL with Hive. After configuring, restart the Spark master and slaves.
Go to $SPARK_HOME/bin and run ./spark-sql --name "lxw1234" --master spark://hadoop:7077 to enter the spark-sql shell.
Test: ./bin/run-example org.apache.spark.examples.sql.hive.HiveFromSpark --driver-class-path $SPARK_HOME/lib/mysql-connector-java-5.0.8-bin.jar