Atlas 2.2.0 Source Compilation and Installation Steps

I. Source Compilation

1. Download the source

	wget http://dlcdn.apache.org/atlas/2.2.0/apache-atlas-2.2.0-sources.tar.gz
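
A minimal sketch of extracting the source, assuming the tarball was downloaded to the current directory and /opt/soft is used as the working directory (matching the paths used later in this guide):

	# Extract the source archive and enter the project root
	tar -zxvf apache-atlas-2.2.0-sources.tar.gz -C /opt/soft/
	cd /opt/soft/apache-atlas-sources-2.2.0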

2. Prerequisites

  • Build environment: Linux CentOS 7.6
  • Maven 3.5.x or later is required (a quick version check is sketched after this list)
  • JDK 1.8 (e.g. 1.8.0_322)
  • The build host should have Internet access if possible, because many JARs need to be downloaded; otherwise the build will be very slow and may fail
  • ZooKeeper, Hadoop, HBase, and Solr must be installed in advance
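
Before building, the toolchain can be checked quickly (a sketch; assumes mvn and java are already on the PATH):

	# Maven should report 3.5.x or later and Java should report 1.8
	mvn -version
	java -version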

3. Update component versions

  1. Change the component versions in the source (the <properties> section shown below) to match your own cluster environment. (Try not to change the Sqoop version; Sqoop 1.4.7 has compatibility problems with Atlas.)
  2. If a component in your environment runs an older version, it is recommended to keep the higher version here; the component versions supported by Atlas should be backward compatible.
	<properties>
        <akka.version>2.3.7</akka.version>
        <antlr4.plugin.version>4.5</antlr4.plugin.version>
        <antlr4.version>4.7</antlr4.version>
        <aopalliance.version>1.0</aopalliance.version>
        <aspectj.runtime.version>1.8.7</aspectj.runtime.version>
        <atlas.surefire.options></atlas.surefire.options>
        <calcite.version>1.16.0</calcite.version>
        <checkstyle.failOnViolation>false</checkstyle.failOnViolation>
        <codehaus.woodstox.stax2-api.version>3.1.4</codehaus.woodstox.stax2-api.version>
        <commons-cli.version>1.4</commons-cli.version>
        <commons-codec.version>1.14</commons-codec.version>
        <commons-collections.version>3.2.2</commons-collections.version>
        <commons-collections4.version>4.4</commons-collections4.version>
        <commons-conf.version>1.10</commons-conf.version>
        <commons-conf2.version>2.2</commons-conf2.version>
        <commons-el.version>1.0</commons-el.version>
        <commons-io.version>2.6</commons-io.version>
        <commons-lang.version>2.6</commons-lang.version>
        <commons-logging.version>1.1.3</commons-logging.version>
        <commons-validator.version>1.6</commons-validator.version>
        <curator.version>4.3.0</curator.version>
        <doxia.version>1.8</doxia.version>
        <dropwizard-metrics>3.2.2</dropwizard-metrics>
        <elasticsearch.version>6.8.15</elasticsearch.version>
        <entity.repository.impl>org.apache.atlas.repository.audit.InMemoryEntityAuditRepository</entity.repository.impl>
        <enunciate-maven-plugin.version>2.13.2</enunciate-maven-plugin.version>
        <failsafe.version>2.18.1</failsafe.version>
        <falcon.version>0.8</falcon.version>
        <fastutil.version>6.5.16</fastutil.version>
        <graph.index.backend>solr</graph.index.backend>
        <graph.storage.backend>berkeleyje</graph.storage.backend>
        <gson.version>2.5</gson.version>
        <guava.version>25.1-jre</guava.version>
        <guice.version>4.1.0</guice.version>
        <hadoop.hdfs-client.version>${hadoop.version}</hadoop.hdfs-client.version>
        <hadoop.version>3.2.2</hadoop.version>
        <hbase.version>2.4.9</hbase.version>
        <hive.version>3.1.2</hive.version>
        <hppc.version>0.8.1</hppc.version>
        <httpcomponents-httpclient.version>4.5.13</httpcomponents-httpclient.version>
        <httpcomponents-httpcore.version>4.4.13</httpcomponents-httpcore.version>
        <jackson.databind.version>2.10.5</jackson.databind.version>
        <jackson.version>2.10.5</jackson.version>
        <janus.version>0.5.3</janus.version>
        <javax-inject.version>1</javax-inject.version>
        <javax.servlet.version>3.1.0</javax.servlet.version>
        <jersey-spring.version>1.19.4</jersey-spring.version>
        <jersey.version>1.19</jersey.version>
        <jettison.version>1.3.7</jettison.version>
        <jetty-maven-plugin.stopWait>10</jetty-maven-plugin.stopWait>
        <jetty.version>9.4.31.v20200723</jetty.version>
        <joda-time.version>2.10.6</joda-time.version>
        <json.version>3.2.11</json.version>
        <jsr.version>1.1</jsr.version>
        <junit.version>4.13</junit.version>
        <kafka.scala.binary.version>2.12</kafka.scala.binary.version>
        <kafka.version>2.8.0</kafka.version>
        <keycloak.version>6.0.1</keycloak.version>
        <log4j.version>1.2.17</log4j.version>
        <log4j2.version>2.13.3</log4j2.version>
        <lucene-solr.version>8.6.3</lucene-solr.version>
        <maven-site-plugin.version>3.7</maven-site-plugin.version>
        <MaxPermGen>512m</MaxPermGen>
        <node-for-v2.version>v12.16.0</node-for-v2.version>
        <npm-for-v2.version>6.13.7</npm-for-v2.version>
        <opencsv.version>5.0</opencsv.version>
        <paranamer.version>2.7</paranamer.version>
        <PermGen>64m</PermGen>
        <poi-ooxml.version>4.1.1</poi-ooxml.version>
        <poi.version>4.1.1</poi.version>
        <project.build.dashboardv2.gruntBuild>build-minify</project.build.dashboardv2.gruntBuild>
        <project.build.dashboardv3.gruntBuild>build-minify</project.build.dashboardv3.gruntBuild>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <projectBaseDir>${project.basedir}</projectBaseDir>
        <skipCheck>false</skipCheck>
        <skipDocs>true</skipDocs>
        <skipEnunciate>false</skipEnunciate>
        <skipITs>false</skipITs>
        <skipSite>true</skipSite>
        <skipTests>false</skipTests>
        <skipUTs>false</skipUTs>
        <slf4j.version>1.7.30</slf4j.version>
        <solr-test-framework.version>8.6.3</solr-test-framework.version>
        <solr.version>7.7.3</solr.version>
        <spray.version>1.3.1</spray.version>
        <spring.security.version>4.2.17.RELEASE</spring.security.version>
        <spring.version>4.3.29.RELEASE</spring.version>
        <sqoop.version>1.4.6.2.3.99.0-195</sqoop.version>
        <storm.version>2.1.0</storm.version>
        <surefire.forkCount>2C</surefire.forkCount>
        <surefire.version>2.18.1</surefire.version>
        <testng.version>6.9.4</testng.version>
        <tinkerpop.version>3.4.10</tinkerpop.version>
        <woodstox-core.version>5.0.3</woodstox-core.version>
        <zookeeper.version>3.7.0</zookeeper.version>
    </properties>

4. Fix the Atlas/Kafka version compatibility issue in the source

The code is at line 139 of src/main/java/org/apache/atlas/kafka/EmbeddedKafkaServer.java; change it to:

	kafkaServer = new KafkaServer(KafkaConfig.fromProps(brokerConfig), Time.SYSTEM, Option.apply(this.getClass().getName()), false);

5. Build

	# Set the Maven JVM memory to avoid out-of-memory errors during the build
	export MAVEN_OPTS="-Xms2g -Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
	# If you want to use the embedded HBase and Solr, build with:
	mvn clean -DskipTests package -Pdist,embedded-hbase-solr
	# If you use your own externally installed components, build with:
	mvn clean -DskipTests package -Pdist

6. Troubleshooting

  • Dependencies cannot be resolved
    1. Retry the build with Internet access so Maven can download them
    2. Find the JARs and their dependencies yourself, put them into your local Maven repository, delete the corresponding *.lastUpdated files, and rebuild (see the sketch below)
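
A minimal cleanup sketch for the second case, assuming the default local repository at ~/.m2/repository:

	# Remove Maven's *.lastUpdated markers so previously failed downloads are retried
	find ~/.m2/repository -name "*.lastUpdated" -delete
	# Then rebuild
	mvn clean -DskipTests package -Pdist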

II. Installation Steps

After a successful build, the packaged artifacts are located in: /opt/soft/apache-atlas-sources-2.2.0/distro/target

1. Extract Atlas

  • Extract the binary package into the installation directory
    tar -zxvf /opt/soft/apache-atlas-sources-2.2.0/distro/target/apache-atlas-2.2.0-bin.tar.gz -C /opt/apps/
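
In the steps below, $ATLAS_HOME refers to the extracted installation directory; a sketch of setting it (the path is an example, adjust it to the actual extracted directory name):

	# Example only: point ATLAS_HOME at the extracted Atlas directory
	export ATLAS_HOME=/opt/apps/apache-atlas-2.2.0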
    

2. Integrate with HBase

  • Edit the $ATLAS_HOME/conf/atlas-application.properties configuration file
    	atlas.graph.storage.backend=hbase2
    	atlas.graph.storage.hbase.table=apache_atlas_janus
    	#HBase
    	#For standalone mode, specify localhost
    	#For distributed mode, specify the ZooKeeper quorum (ZK cluster addresses) here
    	atlas.graph.storage.hostname=hadoop01:2181,hadoop02:2181,hadoop03:2181
    	atlas.graph.storage.hbase.regions-per-server=1
    	atlas.graph.storage.lock.wait-time=10000
    
  • Edit the $ATLAS_HOME/conf/atlas-env.sh configuration file and append the HBase configuration directory at the end
    # Add the HBASE_CONF_DIR path; it can also be configured in /etc/profile
    export HBASE_CONF_DIR=$HBASE_HOME/conf
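
A quick sanity check that HBase and the ZooKeeper quorum configured above are reachable (a sketch; assumes $HBASE_HOME is set on this host):

	# HBase should print the cluster status if the configuration is correct
	echo "status" | $HBASE_HOME/bin/hbase shell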
    

3. Integrate with Solr

  • Edit the $ATLAS_HOME/conf/atlas-application.properties configuration file
    	# Graph Search Index
    	atlas.graph.index.search.backend=solr
    	#Solr
    	#Solr cloud mode properties
    	atlas.graph.index.search.solr.mode=cloud
    	# ZooKeeper cluster addresses used by SolrCloud
    	atlas.graph.index.search.solr.zookeeper-url=hadoop01:2181,hadoop02:2181,hadoop03:2181
    	atlas.graph.index.search.solr.zookeeper-connect-timeout=60000
    	atlas.graph.index.search.solr.zookeeper-session-timeout=60000
    	atlas.graph.index.search.solr.wait-searcher=true
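
When using an external SolrCloud, the three index collections Atlas relies on usually need to be created first. A sketch, assuming Solr is installed under $SOLR_HOME and using the Solr configset shipped in $ATLAS_HOME/conf/solr (shard and replica counts are examples; adjust for your cluster):

	# Create the Atlas index collections on the SolrCloud cluster
	$SOLR_HOME/bin/solr create -c vertex_index   -d $ATLAS_HOME/conf/solr -shards 3 -replicationFactor 2
	$SOLR_HOME/bin/solr create -c edge_index     -d $ATLAS_HOME/conf/solr -shards 3 -replicationFactor 2
	$SOLR_HOME/bin/solr create -c fulltext_index -d $ATLAS_HOME/conf/solr -shards 3 -replicationFactor 2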
    

4. Integrate with Kafka

  • Edit the $ATLAS_HOME/conf/atlas-application.properties configuration file
    	#########  Notification Configs  #########
    	# Disable the embedded notification server (use the external Kafka cluster)
    	atlas.notification.embedded=false
    	# Kafka data storage directory
    	atlas.kafka.data=/opt/data/kafka
    	# ZooKeeper path where Kafka stores its data
    	atlas.kafka.zookeeper.connect=hadoop01:2181,hadoop02:2181,hadoop03:2181/kafka
    	# Kafka cluster bootstrap servers
    	atlas.kafka.bootstrap.servers=hadoop01:9092,hadoop02:9092,hadoop03:9092
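
The notification topics that Atlas uses can be pre-created on the external Kafka cluster. A sketch, assuming $KAFKA_HOME points to the Kafka 2.8.0 installation (partition and replication values are examples):

	# ATLAS_HOOK carries hook notifications, ATLAS_ENTITIES carries entity change events
	$KAFKA_HOME/bin/kafka-topics.sh --create --bootstrap-server hadoop01:9092 --replication-factor 3 --partitions 3 --topic ATLAS_HOOK
	$KAFKA_HOME/bin/kafka-topics.sh --create --bootstrap-server hadoop01:9092 --replication-factor 3 --partitions 3 --topic ATLAS_ENTITIES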
    

5. Atlas Server configuration

  • Edit the $ATLAS_HOME/conf/atlas-application.properties configuration file
    #########  Server Properties  #########
    # Address of the Atlas Web UI / REST API
    atlas.rest.address=http://hadoop01:21000
    # If enabled and set to true, this will run setup steps when the server starts
    # Whether initialization/setup needs to run on every start
    atlas.server.run.setup.on.start=false
    #########  Entity Audit Configs  #########
    atlas.audit.hbase.tablename=apache_atlas_entity_audit
    atlas.audit.zookeeper.session.timeout.ms=1000
    # ZooKeeper quorum of the HBase cluster used for audit
    atlas.audit.hbase.zookeeper.quorum=hadoop01:2181,hadoop02:2181,hadoop03:2181
    
  • Edit the log4j.xml file and uncomment the following block to enable performance metric logging
        <!-- Uncomment the following for perf logs -->
        <appender name="perf_appender" class="org.apache.log4j.DailyRollingFileAppender">
            <param name="file" value="${atlas.log.dir}/atlas_perf.log" />
            <param name="datePattern" value="'.'yyyy-MM-dd" />
            <param name="append" value="true" />
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%d|%t|%m%n" />
            </layout>
        </appender>
        <logger name="org.apache.atlas.perf" additivity="false">
            <level value="debug" />
            <appender-ref ref="perf_appender" />
        </logger>
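
Once the configuration above is in place, Atlas can be started and checked. A sketch, assuming the dependent services (ZooKeeper, HBase, Solr, Kafka) are already running:

	# Start the Atlas server; the first start may take several minutes to initialize HBase tables and Solr indexes
	$ATLAS_HOME/bin/atlas_start.py
	# Verify the REST endpoint (default credentials are admin/admin unless changed)
	curl -u admin:admin http://hadoop01:21000/api/atlas/admin/version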
    

6. Integrate with Hive

  • Edit the $ATLAS_HOME/conf/atlas-application.properties configuration file

    	#########  Hive Hook Configs ########
    	# Whether the Hive hook sends notifications synchronously
    	atlas.hook.hive.synchronous=false
    	# Number of retries
    	atlas.hook.hive.numRetries=3
    	# Queue size
    	atlas.hook.hive.queueSize=10000
    	# Cluster name
    	atlas.cluster.name=primary
    
  • Edit Hive's configuration file ($HIVE_HOME/conf/hive-site.xml) and append the following property, which makes Hive run the Atlas hook

    	<property>
    	    <name>hive.exec.post.hooks</name>
    	    <value>org.apache.atlas.hive.hook.HiveHook</value>
    	</property>
    
  • Extract the apache-atlas-2.2.0-hive-hook.tar.gz file (also produced by the build) and copy the extracted contents into the Atlas installation directory

    	tar -zxvf apache-atlas-2.2.0-hive-hook.tar.gz
    	cd apache-atlas-hive-hook-2.2.0
    	cp -r ./* $ATLAS_HOME
    
  • Edit the $HIVE_HOME/conf/hive-env.sh file and add the hook path to the configuration

    	export HIVE_AUX_JARS_PATH=$ATLAS_HOME/hook/hive
    
  • Copy the Atlas configuration file into Hive's conf directory (a bulk metadata import sketch follows this list)

    • (A symbolic link would run into account permission issues; a simple file copy is used here to avoid them)
    	cp $ATLAS_HOME/conf/atlas-application.properties $HIVE_HOME/conf/
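
With the hook configured, existing Hive metadata can be imported into Atlas in bulk using the import-hive.sh script shipped with the hook package. A sketch, assuming the hook files were copied into $ATLAS_HOME as above and the Atlas server is already running:

	# Imports all Hive databases and tables; prompts for the Atlas username and password
	$ATLAS_HOME/hook-bin/import-hive.sh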
    

7. Integrate with Sqoop [to be completed]
