Atlas Standalone Installation

I. Prepare the virtual machine

  1. Update the system, command: yum -y update
  2. Set the hostname, command: hostnamectl set-hostname atlas
  3. Stop and disable the firewall, commands: systemctl stop firewalld.service and systemctl disable firewalld.service
  4. Reboot, command: reboot

II. Install the JDK

  1. Remove the preinstalled OpenJDK packages, commands:
rpm -e --nodeps java-1.7.0-openjdk
rpm -e --nodeps java-1.7.0-openjdk-headless
rpm -e --nodeps java-1.8.0-openjdk
rpm -e --nodeps java-1.8.0-openjdk-headless
  2. Extract the JDK, commands:
tar -xzvf jdk-8u161-linux-x64.tar.gz -C /home/atlas/
mv jdk1.8.0_161/ jdk1.8
  3. Configure environment variables, commands (a sanity check follows):
vim /etc/profile.d/my_env.sh

export JAVA_HOME=/home/atlas/jdk1.8
export PATH=$PATH:$JAVA_HOME/bin

source /etc/profile
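
A quick sanity check that the JDK is now on the PATH (an optional step, not in the original guide):
java -version
# Should report java version "1.8.0_161"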

III. Configure passwordless SSH

  1. Generate an RSA key pair, command: ssh-keygen -t rsa
  2. Enter the .ssh directory, command: cd /root/.ssh
  3. Authorize the key, commands (a login check follows):
cat id_rsa.pub >> authorized_keys
chmod 600 ./authorized_keys
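
To confirm passwordless login works (optional check; the first connection may ask to accept the host key):
ssh localhost hostname
# Should print "atlas" without prompting for a password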

IV. Install Hadoop 2.7.2

  1. Extract the package, commands:
tar -xzvf hadoop-2.7.2.tar.gz -C /home/atlas/
mv hadoop-2.7.2/ hadoop
  2. Configure environment variables, commands:
vim /etc/profile.d/my_env.sh

export HADOOP_HOME=/home/atlas/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

source /etc/profile
  3. Edit core-site.xml, command:
vim /home/atlas/hadoop/etc/hadoop/core-site.xml


<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/atlas/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>atguigu</value>
    </property>
    <property>
        <name>hadoop.proxyuser.atguigu.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.atguigu.groups</name>
        <value>*</value>
    </property>
</configuration>
  4. Edit hdfs-site.xml, command:
vim /home/atlas/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/atlas/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/atlas/hadoop/tmp/dfs/data</value>
    </property>
</configuration>
  5. Edit yarn-site.xml, command:
vim /home/atlas/hadoop/etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
  6. Create and edit mapred-site.xml, commands:
cp /home/atlas/hadoop/etc/hadoop/mapred-site.xml.template /home/atlas/hadoop/etc/hadoop/mapred-site.xml

vim /home/atlas/hadoop/etc/hadoop/mapred-site.xml

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
  7. Format the NameNode and start Hadoop, commands (a quick check follows):
hdfs namenode -format
start-dfs.sh
start-yarn.sh
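
If everything started, jps should show the expected daemons (an optional check, assuming the single-node config above):
jps
# Expect NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager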

V. Install MySQL

  1. Remove the bundled MariaDB, commands:
rpm -qa|grep mariadb
rpm -e --nodeps mariadb-libs
  2. Extract the bundle, command: tar -xvf mysql-5.7.28-1.el7.x86_64.rpm-bundle.tar
  3. Install the RPMs in this order, commands:
rpm -ivh mysql-community-common-5.7.28-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-5.7.28-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-compat-5.7.28-1.el7.x86_64.rpm
rpm -ivh mysql-community-client-5.7.28-1.el7.x86_64.rpm
rpm -ivh mysql-community-server-5.7.28-1.el7.x86_64.rpm
  4. Initialize the database, command: mysqld --initialize --user=mysql
  5. Find the generated temporary root password, command: cat /var/log/mysqld.log
  6. Start the MySQL service, command: systemctl start mysqld
  7. Log in to MySQL, command: mysql -uroot -p, then enter the temporary password
  8. Change the password, command: set password = password("<new password>");
  9. Allow the root user to connect from any host, command 1: update mysql.user set host='%' where user='root';, command 2: flush privileges; (a verification sketch follows)
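
A minimal check that the host change took effect (optional, not in the original steps):
mysql -uroot -p -e "select user, host from mysql.user where user='root';"
# The root row should now show host = '%'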

VI. Install Hive

  1. Extract the package, commands:
tar -xzvf apache-hive-3.1.2-bin.tar.gz -C /home/atlas/
mv apache-hive-3.1.2-bin/ hive
  2. Configure environment variables for Hive, commands:
vim /etc/profile.d/my_env.sh

export HIVE_HOME=/home/atlas/hive
export PATH=$PATH:$HIVE_HOME/bin

source /etc/profile
  3. Install the MySQL JDBC driver, command: cp /home/atlas/rar/3_mysql/mysql-connector-java-5.1.37.jar /home/atlas/hive/lib/
  4. Edit hive-site.xml, command:
vim /home/atlas/hive/conf/hive-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- JDBC connection URL -->
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost:3306/metastore?useSSL=false&amp;useUnicode=true&amp;characterEncoding=UTF-8</value>
    </property>

    <!-- JDBC connection driver -->
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>

    <!-- JDBC connection username -->
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>

    <!-- JDBC connection password -->
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>970725</value>
    </property>

    <!-- Disable Hive metastore schema version verification -->
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>

    <!-- Metastore event notification API authorization -->
    <property>
        <name>hive.metastore.event.db.notification.api.auth</name>
        <value>false</value>
    </property>
</configuration>
  5. Edit hive-env.sh, commands:
mv /home/atlas/hive/conf/hive-env.sh.template /home/atlas/hive/conf/hive-env.sh
vim /home/atlas/hive/conf/hive-env.sh
Uncomment the line: export HADOOP_HEAPSIZE=1024
  6. Edit hive-log4j2.properties, commands:
mv /home/atlas/hive/conf/hive-log4j2.properties.template /home/atlas/hive/conf/hive-log4j2.properties
vim /home/atlas/hive/conf/hive-log4j2.properties
Set: property.hive.log.dir = /home/atlas/hive/logs
  7. Log in to MySQL, command: mysql -uroot -p
  8. Create the Hive metastore database and exit, command: create database metastore;
  9. Initialize the Hive metastore schema, command: schematool -initSchema -dbType mysql -verbose
  10. Set the metastore database's character encoding, commands (a smoke test follows):
mysql -uroot -p
use metastore
alter table COLUMNS_V2 modify column COMMENT varchar(256) character set utf8;
alter table TABLE_PARAMS modify column PARAM_VALUE varchar(4000) character set utf8;
alter table PARTITION_PARAMS modify column PARAM_VALUE varchar(4000) character set utf8;
alter table PARTITION_KEYS modify column PKEY_COMMENT varchar(4000) character set utf8;
alter table INDEX_PARAMS modify column PARAM_VALUE varchar(4000) character set utf8;
alter table TBLS modify column view_expanded_text mediumtext character set utf8;
alter table TBLS modify column view_original_text mediumtext character set utf8;
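
An optional smoke test once the schema is in place (assumes HDFS and YARN are running):
hive -e "show databases;"
# Should list at least the built-in default database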

VII. Install ZooKeeper

  1. Extract the package, commands:
tar -xzvf apache-zookeeper-3.5.7-bin.tar.gz -C /home/atlas/
mv apache-zookeeper-3.5.7-bin/ zookeeper
  2. Create the zkData directory, command: mkdir -p /home/atlas/zookeeper/zkData
  3. Create the myid file, command: vim /home/atlas/zookeeper/zkData/myid, and write 1 into it
  4. Rename zoo_sample.cfg to zoo.cfg, command: mv /home/atlas/zookeeper/conf/zoo_sample.cfg /home/atlas/zookeeper/conf/zoo.cfg
  5. Edit zoo.cfg:
# Change
dataDir=/home/atlas/zookeeper/zkData
# Append at the end of the file
# (On a single machine, hadoop01, hadoop02 and hadoop03 must resolve, e.g. via /etc/hosts; for a pure standalone setup the server.N lines can be omitted.)
#######################cluster########################## 
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
  6. Start ZooKeeper, command: /home/atlas/zookeeper/bin/zkServer.sh start
  7. Stop: /home/atlas/zookeeper/bin/zkServer.sh stop
  8. Check status (a client check follows): /home/atlas/zookeeper/bin/zkServer.sh status
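
An optional connectivity check with the bundled client (the one-shot command form works in ZooKeeper 3.5.x):
/home/atlas/zookeeper/bin/zkCli.sh -server localhost:2181 ls /
# Should list the root znodes, e.g. [zookeeper]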

VIII. Install Kafka

  1. Extract the package, commands:
tar -xzvf kafka_2.11-2.4.1.tgz -C /home/atlas/
mv kafka_2.11-2.4.1/ kafka
  2. Create the logs directory, command: mkdir -p /home/atlas/kafka/logs
  3. Edit server.properties, command: vim /home/atlas/kafka/config/server.properties
# Enable topic deletion; add after broker.id=0
delete.topic.enable=true
# Change the path where Kafka stores its log data
log.dirs=/home/atlas/kafka/data
# Point Kafka at the ZooKeeper address (under a /kafka chroot)
zookeeper.connect=localhost:2181/kafka
  4. Configure Kafka environment variables, command: vim /etc/profile.d/my_env.sh
export KAFKA_HOME=/home/atlas/kafka
export PATH=$PATH:$KAFKA_HOME/bin
  5. Start, command: /home/atlas/kafka/bin/kafka-server-start.sh -daemon /home/atlas/kafka/config/server.properties
  6. Stop (a round-trip check follows), command: /home/atlas/kafka/bin/kafka-server-stop.sh
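
An optional round trip with the bundled tools (the topic name smoke-test is illustrative):
/home/atlas/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic smoke-test --partitions 1 --replication-factor 1
/home/atlas/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
# smoke-test should appear in the list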

IX. Install HBase

  1. Extract the package, commands:
tar -xzvf hbase-2.0.5-bin.tar.gz -C /home/atlas/
mv hbase-2.0.5/ hbase
  2. Configure environment variables, commands:
vim /etc/profile.d/my_env.sh

export HBASE_HOME=/home/atlas/hbase
export PATH=$PATH:$HBASE_HOME/bin
  3. Edit hbase-env.sh, command: vim /home/atlas/hbase/conf/hbase-env.sh
# Change
export HBASE_MANAGES_ZK=false  # was true
  4. Edit hbase-site.xml, command: vim /home/atlas/hbase/conf/hbase-site.xml
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/HBase</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
</property>
  5. Start, command: /home/atlas/hbase/bin/start-hbase.sh
  6. Stop (a status check follows): /home/atlas/hbase/bin/stop-hbase.sh
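
An optional status check through the HBase shell (assumes HDFS and ZooKeeper are up):
echo "status" | /home/atlas/hbase/bin/hbase shell -n
# Should report 1 active master and 1 region server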

X. Install Solr

  1. Extract the package, commands:
tar -xzvf /home/atlas/rar/solr-7.7.3.tgz -C /home/atlas/
mv solr-7.7.3/ solr
  2. Create a solr user, command: useradd solr
  3. Set its password, command: echo solr | passwd --stdin solr
  4. Make the solr user the owner of the Solr directory, command: chown -R solr:solr /home/atlas/solr
  5. Edit /home/atlas/solr/bin/solr.in.sh, commands:
vim /home/atlas/solr/bin/solr.in.sh
ZK_HOST="localhost:2181"
  6. Start (an HTTP check follows), command: sudo -i -u solr /home/atlas/solr/bin/solr start
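
An optional check that Solr is answering over HTTP (8983 is Solr's default port):
curl http://localhost:8983/solr/admin/info/system?wt=json
# A JSON response means the node is up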

XI. Install Atlas

1) Upload and extract the package

  1. Extract apache-atlas-2.1.0-server.tar.gz and rename the directory to atlas, commands:
tar -xzvf /home/atlas/rar/9_atlas/apache-atlas-2.1.0-server.tar.gz -C /home/atlas/
mv apache-atlas-2.1.0/ atlas

2) Integrate Atlas with HBase

  1. Edit atlas/conf/atlas-application.properties, command: vim /home/atlas/atlas/conf/atlas-application.properties
atlas.graph.storage.hostname=localhost:2181
  2. Edit atlas/conf/atlas-env.sh, command: vim /home/atlas/atlas/conf/atlas-env.sh
# Append at the end of the file
export HBASE_CONF_DIR=/home/atlas/hbase/conf

3) Integrate Atlas with Solr

  1. Edit atlas/conf/atlas-application.properties, command: vim /home/atlas/atlas/conf/atlas-application.properties
# Comment out the Solr cloud-mode properties here
#Solr cloud mode properties
#atlas.graph.index.search.solr.mode=cloud
#atlas.graph.index.search.solr.zookeeper-url=
#atlas.graph.index.search.solr.zookeeper-connect-timeout=60000
#atlas.graph.index.search.solr.zookeeper-session-timeout=60000
#atlas.graph.index.search.solr.wait-searcher=true

#Solr http mode properties
atlas.graph.index.search.solr.mode=http
atlas.graph.index.search.solr.http-urls=http://localhost:8983/solr
  2. Copy the Solr config shipped with Atlas, command: cp -rf /home/atlas/atlas/conf/solr /home/atlas/solr/atlas_conf
  3. Create the vertex_index collection, command (see the note after it for the other collections Atlas expects):
sudo -i -u solr /home/atlas/solr/bin/solr create -c vertex_index -d /home/atlas/solr/atlas_conf
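
The upstream Atlas setup notes create two more collections alongside vertex_index; if index errors show up at startup, they can be created the same way:
sudo -i -u solr /home/atlas/solr/bin/solr create -c edge_index -d /home/atlas/solr/atlas_conf
sudo -i -u solr /home/atlas/solr/bin/solr create -c fulltext_index -d /home/atlas/solr/atlas_conf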

4) Integrate Atlas with Kafka

  1. Edit atlas/conf/atlas-application.properties, command: vim /home/atlas/atlas/conf/atlas-application.properties
atlas.notification.embedded=false 
atlas.kafka.data=/home/atlas/kafka/data 
atlas.kafka.zookeeper.connect=localhost:2181/kafka 
atlas.kafka.bootstrap.servers=localhost:9092

5) Atlas server configuration

  1. Edit atlas/conf/atlas-application.properties, command: vim /home/atlas/atlas/conf/atlas-application.properties
atlas.server.run.setup.on.start=false
  2. Edit atlas-log4j.xml, command: vim /home/atlas/atlas/conf/atlas-log4j.xml
# Uncomment the following block
<appender name="perf_appender" class="org.apache.log4j.DailyRollingFileAppender">
    <param name="file" value="${atlas.log.dir}/atlas_perf.log" />
    <param name="datePattern" value="'.'yyyy-MM-dd" />
    <param name="append" value="true" />
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d|%t|%m%n" />
    </layout>
</appender>

<logger name="org.apache.atlas.perf" additivity="false">
    <level value="debug" />
    <appender-ref ref="perf_appender" />
</logger>

6) Integrate Atlas with Hive

  1. Edit atlas/conf/atlas-application.properties, command: vim /home/atlas/atlas/conf/atlas-application.properties
# Append at the end of the file
######### Hive Hook Configs ####### 
atlas.hook.hive.synchronous=false 
atlas.hook.hive.numRetries=3 
atlas.hook.hive.queueSize=10000 
atlas.cluster.name=primary
  2. Edit hive-site.xml, command: vim /home/atlas/hive/conf/hive-site.xml
# Add inside the <configuration> tag
<property>
    <name>hive.exec.post.hooks</name>
    <value>org.apache.atlas.hive.hook.HiveHook</value>
</property>

7) Install the Hive Hook

  1. Extract the Hive Hook, command: tar -zxvf apache-atlas-2.1.0-hive-hook.tar.gz
  2. Copy the hook files into the Atlas installation, command: cp -r apache-atlas-hive-hook-2.1.0/* /home/atlas/atlas/
  3. Edit hive/conf/hive-env.sh, command: vim /home/atlas/hive/conf/hive-env.sh
export HIVE_AUX_JARS_PATH=/home/atlas/atlas/hook/hive
  4. Copy the Atlas config file /home/atlas/atlas/conf/atlas-application.properties into /home/atlas/hive/conf (a note on importing existing metadata follows), command: cp /home/atlas/atlas/conf/atlas-application.properties /home/atlas/hive/conf/
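
For pulling pre-existing Hive metadata into Atlas, the hook package ships an import script; it can be run after Atlas is up and prompts for the Atlas admin credentials:
/home/atlas/atlas/hook-bin/import-hive.sh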

XII. Start Atlas

1) Start the prerequisite services

  1. Start Hadoop, command: start-all.sh
  2. Start ZooKeeper, command: /home/atlas/zookeeper/bin/zkServer.sh start
  3. Start Kafka, command: /home/atlas/kafka/bin/kafka-server-start.sh -daemon /home/atlas/kafka/config/server.properties
  4. Start HBase, command: /home/atlas/hbase/bin/start-hbase.sh
  5. Start Solr, command: sudo -i -u solr /home/atlas/solr/bin/solr start

2) Start the Atlas service

  1. Enter the Atlas bin directory, command: cd /home/atlas/atlas/bin
  2. Run the start script, command: ./atlas_start.py, then wait about two minutes
  3. Visit port 21000 of the server in a browser
  4. Log in with the default account, username: admin, password: admin (a quick API check follows)
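
An optional check that the server is fully up (the version endpoint is part of the Atlas REST API):
curl -u admin:admin http://localhost:21000/api/atlas/admin/version
# Returns a small JSON document with the Atlas version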