Ranger Installation

1. Install Maven 3.3.9
1) Extract the downloaded package into /usr/local/:

tar -xvf apache-maven-3.3.9-bin.tar.gz

2) Append the following to the end of ~/.bashrc for the hadoop user:

export M2_HOME=/usr/local/apache-maven-3.3.9
export M2=$M2_HOME/bin
export PATH=$M2:$PATH

Run source .bashrc to make the changes take effect.
3) Run mvn -version to check that Maven is installed correctly.

2. Install Git

sudo yum install git

3. Install gcc [optional]
[This is optional; it is only required if you use the Linux /etc/passwd to authenticate logins to Ranger Admin.
It is not required if you use Ranger Admin local users/passwords or LDAP for authentication.]

sudo yum install gcc

4. Build Ranger from Source
1) Clone the source code; checking out the ranger-0.5 branch this way gives version 0.5.4:

mkdir ~/dev
cd ~/dev
git clone https://github.com/apache/incubator-ranger.git
cd incubator-ranger
git checkout ranger-0.5

2) Compile the source:

cd ~/dev/incubator-ranger
export MAVEN_OPTS="-Xmx512M"
export JAVA_HOME=/usr/local/java/jdk1.7.0_51
export PATH=$JAVA_HOME/bin:$PATH
mvn clean compile package assembly:assembly install 
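
If the full build is slow or a unit test fails in your environment, a commonly used variant is to skip the tests; this is my own suggestion rather than part of the original steps (-DskipTests is a standard Maven option):

# same goals as above, but without running the unit tests
mvn -DskipTests=true clean compile package assembly:assembly install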

The build takes quite a while here, roughly two hours. My first attempt failed; after adding the following mirror to Maven's settings.xml (inside the <mirrors> section) and rebuilding, it succeeded:

<mirror>
    <id>central_mirror</id>
    <url>http://repo1.maven.org/maven2/</url>
    <mirrorOf>central</mirrorOf>
</mirror>
After a successful build, the installable archives are in ~/dev/incubator-ranger/target:

[hadoop@master target]$ ls
antrun                                     ranger-0.5.4-SNAPSHOT-migration-util.tar.gz
archive-tmp                                ranger-0.5.4-SNAPSHOT-migration-util.zip
maven-shared-archive-resources             ranger-0.5.4-SNAPSHOT-ranger-tools.tar.gz
ranger-0.5.4-SNAPSHOT-admin.tar.gz         ranger-0.5.4-SNAPSHOT-ranger-tools.zip
ranger-0.5.4-SNAPSHOT-admin.zip            ranger-0.5.4-SNAPSHOT-solr-plugin.tar.gz
ranger-0.5.4-SNAPSHOT-hbase-plugin.tar.gz  ranger-0.5.4-SNAPSHOT-solr-plugin.zip
ranger-0.5.4-SNAPSHOT-hbase-plugin.zip     ranger-0.5.4-SNAPSHOT-src.tar.gz
ranger-0.5.4-SNAPSHOT-hdfs-plugin.tar.gz   ranger-0.5.4-SNAPSHOT-src.zip
ranger-0.5.4-SNAPSHOT-hdfs-plugin.zip      ranger-0.5.4-SNAPSHOT-storm-plugin.tar.gz
ranger-0.5.4-SNAPSHOT-hive-plugin.tar.gz   ranger-0.5.4-SNAPSHOT-storm-plugin.zip
ranger-0.5.4-SNAPSHOT-hive-plugin.zip      ranger-0.5.4-SNAPSHOT-usersync.tar.gz
ranger-0.5.4-SNAPSHOT-kafka-plugin.tar.gz  ranger-0.5.4-SNAPSHOT-usersync.zip
ranger-0.5.4-SNAPSHOT-kafka-plugin.zip     ranger-0.5.4-SNAPSHOT-yarn-plugin.tar.gz
ranger-0.5.4-SNAPSHOT-kms.tar.gz           ranger-0.5.4-SNAPSHOT-yarn-plugin.zip
ranger-0.5.4-SNAPSHOT-kms.zip              rat.txt
ranger-0.5.4-SNAPSHOT-knox-plugin.tar.gz   version
ranger-0.5.4-SNAPSHOT-knox-plugin.zip

5. Ranger Policy Admin Installation

cd /usr/local
[hadoop@master local]$ sudo tar zxvf ~/dev/incubator-ranger/target/ranger-0.5.4-SNAPSHOT-admin.tar.gz
sudo ln -s ranger-0.5.4-SNAPSHOT-admin ranger-admin
cd ranger-admin/

1) Ranger Admin installation and configuration
a) Open the install.properties file in the Ranger Admin directory and set the following parameters:

db_root_user=root
db_root_password=root
db_host=localhost
db_name=ranger
db_user=rangeradmin
db_password=rangeradmin
audit_db_name=ranger
audit_db_user=rangerlogger
audit_db_password=rangerlogger
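
These settings assume a local MySQL instance. setup.sh will normally create the database and these users itself using the db_root credentials above; if you prefer to prepare MySQL manually first, a minimal sketch (with names and passwords matching the properties) could look like this:

# run on the MySQL host; adjust passwords/hosts to your environment
mysql -u root -proot <<'SQL'
CREATE DATABASE IF NOT EXISTS ranger;
CREATE USER 'rangeradmin'@'localhost' IDENTIFIED BY 'rangeradmin';
CREATE USER 'rangerlogger'@'localhost' IDENTIFIED BY 'rangerlogger';
GRANT ALL PRIVILEGES ON ranger.* TO 'rangeradmin'@'localhost';
GRANT ALL PRIVILEGES ON ranger.* TO 'rangerlogger'@'localhost';
FLUSH PRIVILEGES;
SQL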

b) Install Ranger Admin
./setup.sh
c) Start the Ranger Admin service
ranger-admin start (stop: ranger-admin stop, restart: ranger-admin restart)
d) Verify the Ranger Admin service: open localhost:6080 in a browser. If the Ranger login page appears, the installation succeeded. Default username/password: admin/admin
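
A quick command-line check is also possible; this only confirms that the web UI is answering on port 6080 (my own addition, not part of the original steps):

# expect an HTTP status code such as 200 or 302 once Ranger Admin is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:6080/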

6. Ranger Usersync Installation and Configuration

cd /usr/local
sudo tar zxvf ~/dev/incubator-ranger/target/ranger-0.5.4-SNAPSHOT-usersync.tar.gz
sudo ln -s ranger-0.5.4-SNAPSHOT-usersync/ ranger-usersync
sudo mkdir -p /var/log/ranger-usersync
sudo chown ranger /var/log/ranger-usersync
sudo chgrp ranger /var/log/ranger-usersync
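
The chown/chgrp above assume that a ranger user and group already exist on this machine; if they do not, create them first (my own addition, adjust to your environment):

# create the ranger user (and its primary group) if it is missing
sudo useradd ranger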

a) Go into the ranger-usersync directory and edit install.properties, filling in the relevant settings:

POLICY_MGR_URL = http://localhost:6080
SYNC_SOURCE = unix
logdir = /var/log/ranger-usersync

b) Install usersync
./setup.sh
c) Start the usersync service
./ranger-usersync-services.sh start
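
Once usersync has completed a sync cycle, the Unix accounts from /etc/passwd should show up under Users/Groups in the Ranger Admin UI. A command-line spot check is also possible; the endpoint below is Ranger's user REST API as I understand it for this version, so treat it as an assumption:

# list the users known to Ranger Admin (default admin/admin credentials assumed)
curl -s -u admin:admin http://localhost:6080/service/xusers/users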

7. HDFS Plugin Installation and Configuration
Hadoop is already installed on this machine, under /home/hadoop/hadoop-2.7.2.
Create an hdfs user: sudo useradd hdfs

cd /usr/local
sudo tar zxvf ~/dev/incubator-ranger/target/ranger-0.5.4-SNAPSHOT-hdfs-plugin.tar.gz
sudo ln -s ranger-0.5.4-SNAPSHOT-hdfs-plugin/ ranger-hdfs-plugin
cd ranger-hdfs-plugin/

a) Edit install.properties as follows:

POLICY_MGR_URL=http://localhost:6080
REPOSITORY_NAME=hadoopdev
XAAUDIT.DB.IS_ENABLED=true
XAAUDIT.DB.FLAVOUR=MYSQL
XAAUDIT.DB.HOSTNAME=localhost
XAAUDIT.DB.DATABASE_NAME=ranger_audit
XAAUDIT.DB.USER_NAME=rangerlogger
XAAUDIT.DB.PASSWORD=rangerlogger
b) Create a hadoop directory under /usr/local/ that points at the existing Hadoop installation:

sudo mkdir /usr/local/hadoop
sudo ln -s /home/hadoop/hadoop-2.7.2/etc/hadoop/ /usr/local/hadoop/conf
echo "export HADOOP_HOME=/home/hadoop/hadoop-2.7.2" >> /etc/bashrc   # run as root; sudo does not carry over to the redirection

Enable the HDFS plugin as root:

[root@master ranger-hdfs-plugin]# ./enable-hdfs-plugin.sh
Custom user and group is available, using custom user and group.
ERROR: Unable to find the lib directory of component [hadoop]; dir [/usr/local/hadoop/lib] not found. Exiting installation.

Both the HDFS plugin's jars and Hadoop's own HDFS jars need to be made available under /usr/local/hadoop/lib:

[root@master ranger-hdfs-plugin]# cp ./lib/ranger-hdfs-plugin-impl/*.jar /home/hadoop/hadoop-2.7.2/share/hadoop/hdfs/lib/
ln -s /home/hadoop/hadoop-2.7.2/share/hadoop/hdfs/lib/ /usr/local/hadoop/lib

Now run ./enable-hdfs-plugin.sh again as root; this time it succeeds.

c) Restart HDFS so the NameNode loads the plugin (a restart sketch follows):
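
Since the Hadoop sbin directory is on the PATH in this setup, a restart can be done with the standard scripts (run as the hadoop user):

# restart HDFS so the NameNode picks up the Ranger HDFS plugin
stop-dfs.sh
start-dfs.sh
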
d) Verify the HDFS plugin: log in to the Ranger Admin UI to check whether the HDFS plugin has registered, and find that it has not.
Cause: the REPOSITORY_NAME defined in install.properties during the HDFS plugin installation (value hadoopdev) was never registered in Ranger Admin as an HDFS service named hadoopdev.
Fix:
1. Log in to Ranger Admin.
2. Click the add button for the HDFS plugin.
3. Define the service name as hadoopdev, username: admin, password: admin, namenode_url: hdfs://192.168.23.139:9000 (my NameNode's IP address is 192.168.23.139, port 9000).
Note: if the Ranger usersync service is not installed and running, testing HDFS permission grants directly will not succeed.
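
To see enforcement end to end once usersync is running, one option is to create a policy in the Ranger Admin UI (for example, granting read/execute on a test path to the hadoop user) and then access that path as different users. The path and users below are only an illustration; whether the second command is denied also depends on the plugin's fallback to native HDFS permissions:

# expected to be allowed by the example policy
sudo -u hadoop hdfs dfs -ls /tmp
# expected to be denied if neither a Ranger policy nor native HDFS permissions allow it
sudo -u hdfs hdfs dfs -ls /tmp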

——————————————————————————————————————————————-

8. Hive Plugin Installation and Configuration
Hive (apache-hive-2.1.1-bin) is already installed under /home/hadoop.

cd /usr/local
[hadoop@master local]$ sudo ln -s ~/apache-hive-2.1.1-bin hive
[hadoop@master hive]$ sudo useradd hive
[hadoop@master local]$ cd hive
#Export HIVE_HOME to bashrc
[root@master hive]# echo "export HIVE_HOME=/usr/local/hive" >> /etc/bashrc

Also configure HADOOP_VERSION in /etc/bashrc by adding HADOOP_VERSION="2.7.2" to that file (without HADOOP_VERSION set, hiveserver2 fails to start).
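
For example, as root (the export is my own addition so that child processes can see the variable):

echo 'export HADOOP_VERSION="2.7.2"' >> /etc/bashrc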

cd /usr/local
sudo tar zxvf ~/dev/incubator-ranger/target/ranger-0.5.4-SNAPSHOT-hive-plugin.tar.gz
sudo ln -s ranger-0.5.4-SNAPSHOT-hive-plugin/ ranger-hive-plugin
cd ranger-hive-plugin

a) Edit install.properties as follows:

POLICY_MGR_URL=http://localhost:6080
REPOSITORY_NAME=hivedev
XAAUDIT.DB.DATABASE_NAME=ranger_audit
XAAUDIT.DB.FLAVOUR=MYSQL
XAAUDIT.DB.HOSTNAME=localhost
XAAUDIT.DB.IS_ENABLED=true
XAAUDIT.DB.PASSWORD=rangerlogger
XAAUDIT.DB.USER_NAME=rangerlogger

b) Run ./enable-hive-plugin.sh
c) Adjust ownership of the Hive log directory and configuration files:

[root@master ranger-admin]# chown -R hive:hive /var/log/hive
[root@master ranger-admin]# chown -R hadoop:hadoop /home/hadoop/apache-hive-2.1.1-bin/conf/hiveserver2-site.xml
[root@master ranger-admin]# chown -R hive:hadoop /home/hadoop/apache-hive-2.1.1-bin/conf/hive-log4j2.properties
[root@master ranger-admin]# chown -R hive:hadoop /home/hadoop/apache-hive-2.1.1-bin/conf/hive-site.xml

Start the Hive metastore service:

[hadoop@master conf]$ hive --service metastore &

[hadoop@master /]$ sudo hive --service hiveserver2 &   (the hiveserver2 log can be found in /tmp/root/hive.log)
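
Before connecting with beeline, it is worth confirming that HiveServer2 is listening on its default port 10000 (this check is my own addition and assumes the ss utility is available):

# HiveServer2 listens on port 10000 by default
ss -lnt | grep 10000
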
Test the connection to the metastore with beeline:
[hadoop@master /]$ /usr/local/hive/bin/beeline -u "jdbc:hive2://localhost:10000" -n hadoop   (the hadoop user here must match the proxy user configured in core-site.xml, otherwise an error is thrown)
which: no hbase in (/usr/local/apache-maven-3.3.9/bin:/usr/local/java/jdk1.7.0_51/bin:/usr/local/apache-maven-3.3.9/bin:/usr/local/java/jdk1.7.0_51/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/hadoop/hadoop-2.7.2/sbin:/home/hadoop/hadoop-2.7.2/bin:/home/hadoop/apache-hive-2.1.1-bin/bin:/home/hadoop/.local/bin:/home/hadoop/bin:/home/hadoop/hadoop-2.7.2/sbin:/home/hadoop/hadoop-2.7.2/bin:/home/hadoop/apache-hive-2.1.1-bin/bin)
Connecting to jdbc:hive2://localhost:10000
Connected to: Apache Hive (version 2.1.1)
Driver: Hive JDBC (version 2.1.1)
17/02/25 15:01:04 [main]: WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not support autoCommit=false.
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 2.1.1 by Apache Hive
0: jdbc:hive2://localhost:10000>
This shows that hiveserver2 started successfully and the metastore connection works.
—————————————————————————————————————————————
At this point the following commands fail with permission errors, because no Hive policies have been configured yet:
0: jdbc:hive2://localhost:10000> show databases;
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [hadoop] does not have [USE] privilege on [null] (state=42000,code=40000)
0: jdbc:hive2://localhost:10000> use default;
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [hadoop] does not have [USE] privilege on [default] (state=42000,code=40000)
0: jdbc:hive2://localhost:10000>
Fix: configure a Hive policy in the Ranger Admin UI that grants the hadoop user the required privileges. Running the commands again then lists all databases in Hive.


Bug encountered: hadoop is not allowed to impersonate hive. Reference: http://blog.csdn.net/swing2008/article/details/53230145
Add the following to the Hadoop configuration file core-site.xml:
<property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
</property>
<property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
</property> 
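
After editing core-site.xml, the new proxy-user settings must be picked up: either restart HDFS/YARN, or refresh them on the running daemons with the standard Hadoop admin commands:

# reload the proxy-user (impersonation) settings without a full restart
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration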

d) Log in to Ranger Admin and click the add button for the Hive plugin. Define a service named hivedev with jdbc.driverClassName set to org.apache.hive.jdbc.HiveDriver and jdbc.url set to jdbc:hive2://localhost:10000; I used admin for both the username and password. Test Connection succeeds; after clicking Add, a hivedev entry appears under Audit -> Plugins, which means the Hive plugin is installed successfully.
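
As a final check, the registered services (hadoopdev and hivedev) can also be listed over the REST API; the endpoint below is the public v2 API path as I understand it for this release, so treat it as an assumption:

# list the services registered in Ranger Admin (default admin/admin credentials)
curl -s -u admin:admin http://localhost:6080/service/public/v2/api/service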

Reference:
https://cwiki.apache.org/confluence/display/RANGER/Apache+Ranger+0.5.0+Installation
