Flink Standalone Deployment

This article walks through deploying Flink in standalone mode.


Flink Deployment and Job Submission

1. Prerequisites

The machine used to compile the source should ideally meet the following specs:

  • CPU: more than 4 cores
  • Memory: more than 8 GB
  • Note: the machine I use here has 4 cores and 8 GB of RAM; if memory is too small, the compile step can fail with an OOM (out-of-memory) error.
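One way to reduce the chance of that OOM is to raise the heap that the Maven JVM itself runs with before starting the build. The sizes below are assumed starting points, not requirements; tune them to your machine's free memory:

```shell
# Give Maven's JVM more headroom before compiling Flink from source.
# 2g heap / 512m metaspace are assumptions -- adjust to your machine.
export MAVEN_OPTS="-Xmx2g -XX:MaxMetaspaceSize=512m"
echo "$MAVEN_OPTS"
```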

2. Install the JDK (I have 1.8 installed locally)

[root@hadoop2 conf]# java -version

java version "1.8.0_131"

Java(TM) SE Runtime Environment (build 1.8.0_131-b11)

Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)

 

3. Install Maven

Since we are installing Flink by compiling it from source, Apache Maven also needs to be installed in advance.

Note: Apache Maven 3.6.3 is installed here.

Maven is a project-management and build-automation tool. As developers, what we care about most is its build functionality, so this section covers using Maven for the project's day-to-day needs.

(1) Download

[root@hadoop1 src]# wget https://mirror.bit.edu.cn/apache/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz

(2) Extract

[root@hadoop1 src]# tar -zxf apache-maven-3.6.3-bin.tar.gz
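The same tar invocation in miniature, using throwaway /tmp paths (illustrative only): -z selects gzip, -x extracts, -f names the archive, and -C changes directory first.

```shell
# Build a tiny .tar.gz, then extract it with the same -zxf flags used above.
mkdir -p /tmp/tar-demo/src
echo hello > /tmp/tar-demo/src/file.txt
tar -czf /tmp/tar-demo/demo.tar.gz -C /tmp/tar-demo src
mkdir -p /tmp/tar-demo/out
tar -zxf /tmp/tar-demo/demo.tar.gz -C /tmp/tar-demo/out
cat /tmp/tar-demo/out/src/file.txt   # prints hello
```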

 

(3) Create a maven directory

[root@hadoop1 src]# mkdir maven

[root@hadoop1 src]# mv apache-maven-3.6.3/ maven/

[root@hadoop1 apache-maven-3.6.3]# pwd

/usr/local/src/maven/apache-maven-3.6.3

 

(4) Create a repository directory for the local Maven repo inside the maven directory

[root@hadoop1 maven]# mkdir repository

[root@hadoop1 maven]# ls

apache-maven-3.6.3  repository

 

(5) Configure environment variables

vim /etc/profile



export MAVEN_HOME=/usr/local/src/maven/apache-maven-3.6.3

export PATH=$PATH:$MAVEN_HOME/bin

[root@hadoop1 apache-maven-3.6.3]# source /etc/profile
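A quick sanity check that the two profile lines took effect in the current shell (paths match the install location above):

```shell
# Re-export the two profile entries and confirm mvn's bin directory is on PATH.
export MAVEN_HOME=/usr/local/src/maven/apache-maven-3.6.3
export PATH=$PATH:$MAVEN_HOME/bin
echo ":$PATH:" | grep -q ":$MAVEN_HOME/bin:" && echo "maven bin on PATH"
```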

(6) Edit the configuration file settings.xml

[root@hadoop1 conf]# vi /usr/local/src/maven/apache-maven-3.6.3/conf/settings.xml

  <!-- localRepository

   | The path to the local repository maven will use to store artifacts.

   |

   | Default: ${user.home}/.m2/repository

  <localRepository>/path/to/local/repo</localRepository>

  -->

  <localRepository>/usr/local/src/maven/repository</localRepository>





<mirrors>

    <!-- mirror

     | Specifies a repository mirror site to use instead of a given repository. The repository that

     | this mirror serves has an ID that matches the mirrorOf element of this mirror. IDs are used

     | for inheritance and direct lookup purposes, and must be unique across the set of mirrors.

     |

    <mirror>

      <id>mirrorId</id>

      <mirrorOf>repositoryId</mirrorOf>

      <name>Human Readable Name for this Mirror.</name>

      <url>http://my.repository.com/repo/path</url>

    </mirror>

     -->

    <mirror>

        <id>alimaven</id>

        <name>aliyun maven</name>

        <url>http://maven.aliyun.com/nexus/content/groups/public/</url>

        <mirrorOf>central</mirrorOf>

    </mirror>

  </mirrors>

 

(7) Verify the installation

[root@hadoop1 conf]# mvn -version

Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)

Maven home: /usr/local/src/maven/apache-maven-3.6.3

Java version: 1.8.0_131, vendor: Oracle Corporation, runtime: /usr/java/jdk1.8.0_131/jre

Default locale: en_US, platform encoding: UTF-8

OS name: "linux", version: "3.10.0-1127.el7.x86_64", arch: "amd64", family: "unix"

4. Install Node.js

Flink's web-dashboard module requires Node.js to build, so it also needs to be installed in advance. Node.js is a JavaScript runtime built on Chrome's V8 engine; its event-driven, non-blocking I/O model makes it lightweight and efficient.

 

(1) Download

[root@hadoop1 conf]# wget https://nodejs.org/dist/v12.8.1/node-v12.8.1-linux-x64.tar.gz

(2) Extract

[root@hadoop1 conf]# tar -zxf node-v12.8.1-linux-x64.tar.gz -C /usr/local/

[root@hadoop1 local]# mv node-v12.8.1-linux-x64/ nodejs

 

(3) Create symlinks

[root@hadoop1 local]# ln -s /usr/local/nodejs/bin/node  /usr/local/bin/

[root@hadoop1 local]# ln -s /usr/local/nodejs/bin/npm  /usr/local/bin/

[root@hadoop1 local]# cd /usr/local/bin/

[root@hadoop1 bin]# ll

total 0

lrwxrwxrwx 1 root root 26 Nov 13 04:29 node -> /usr/local/nodejs/bin/node

lrwxrwxrwx 1 root root 25 Nov 13 04:29 npm -> /usr/local/nodejs/bin/npm
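The same ln -s pattern in miniature, with throwaway /tmp paths (illustrative only; substitute the nodejs paths above), using readlink to confirm where a link points:

```shell
# Create a target, link to it, and resolve the link back with readlink.
mkdir -p /tmp/demo-bin
touch /tmp/demo-bin/node-demo
ln -sf /tmp/demo-bin/node-demo /tmp/node-link
readlink /tmp/node-link   # prints /tmp/demo-bin/node-demo
```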

 

(4) Check the version

 

[root@hadoop1 bin]# pwd

/usr/local/nodejs/bin

[root@hadoop1 bin]# node -v

v12.8.1

 

5. Install the Angular CLI

[root@hadoop1 bin]# npm install -g --registry=https://registry.npm.taobao.org @angular/cli

(1) Set the install path

[root@hadoop1 bin]# npm config list

; cli configs

metrics-registry = "https://registry.npmjs.org/"

scope = ""

user-agent = "npm/6.10.2 node/v12.8.1 linux x64"



; userconfig /root/.npmrc

prefix = "/root/.npm-packages"



; node bin location = /usr/local/nodejs/bin/node

; cwd = /usr/local/nodejs/bin

; HOME = /root

; "npm config ls -l" to show all defaults.







[root@hadoop1 bin]# ln -s /root/.npm-packages/bin/ng /usr/local/bin/ng

 

(2) Check the version info

 

[root@hadoop1 bin]# ng version



     _                      _                 ____ _     ___

    / \   _ __   __ _ _   _| | __ _ _ __     / ___| |   |_ _|

   / △ \ | '_ \ / _` | | | | |/ _` | '__|   | |   | |    | |

  / ___ \| | | | (_| | |_| | | (_| | |      | |___| |___ | |

 /_/   \_\_| |_|\__, |\__,_|_|\__,_|_|       \____|_____|___|

                |___/

    



Angular CLI: 11.0.1

Node: 12.8.1

OS: linux x64



Angular:

...

Ivy Workspace:



Package                      Version

------------------------------------------------------

@angular-devkit/architect    0.1100.1 (cli-only)

@angular-devkit/core         11.0.1 (cli-only)

@angular-devkit/schematics   11.0.1 (cli-only)

@schematics/angular          11.0.1 (cli-only)

@schematics/update           0.1100.1 (cli-only)

    

6. Install hadoop-2.6.0-cdh5.16.2

(1) Download:

wget http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.16.2.tar.gz

 

(2) Extract

[root@hadoop1 src]# tar -zxf hadoop-2.6.0-cdh5.16.2.tar.gz

 

(3) Configure environment variables
 

[root@hadoop1 src]# vim /etc/profile


export HADOOP_HOME=/usr/local/src/hadoop-2.6.0-cdh5.16.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin



[root@hadoop1 src]# source /etc/profile
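As with Maven, you can sanity-check that bin and sbin landed on PATH (paths assumed from the extract location above); start-all.sh later runs from $HADOOP_HOME/sbin:

```shell
# Re-export the profile entries and verify sbin is reachable on PATH.
export HADOOP_HOME=/usr/local/src/hadoop-2.6.0-cdh5.16.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
echo ":$PATH:" | grep -q ":$HADOOP_HOME/sbin:" && echo "hadoop sbin on PATH"
```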

 

(4) Edit the configuration file hadoop-env.sh

 

[root@hadoop1 hadoop-2.6.0-cdh5.16.2]# cd $HADOOP_HOME/

[root@hadoop1 hadoop-2.6.0-cdh5.16.2]#  vim etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_131/

(5) Edit the configuration file core-site.xml

[root@hadoop1 hadoop-2.6.0-cdh5.16.2]# vim etc/hadoop/core-site.xml



<configuration>

<property>

    <name>fs.defaultFS</name>

    <value>hdfs://hadoop1:8020</value>

  </property>

</configuration>

 

(6) Edit the configuration file hdfs-site.xml

[root@hadoop1 hadoop-2.6.0-cdh5.16.2]# vim etc/hadoop/hdfs-site.xml





<configuration>

  <property>

    <name>dfs.replication</name>

    <value>1</value>

  </property>

  <property>

    <name>hadoop.tmp.dir</name>

    <value>/data/hadoop/tmp</value>

  </property>

</configuration>
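Note: hadoop.tmp.dir is conventionally set in core-site.xml rather than hdfs-site.xml. Leaving it here still works for the HDFS daemons, but if you prefer the conventional layout, the same property (same value as above) would sit inside core-site.xml's configuration block:

```
<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/hadoop/tmp</value>
</property>
```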

(7) Set the slave nodes' IPs or hostnames

[root@hadoop1 /usr/local/hadoop-2.6.0-cdh5.16.2]# vim etc/hadoop/slaves

hadoop1

(8) Configure YARN

[root@hadoop1 /usr/local/hadoop-2.6.0-cdh5.16.2]# vim etc/hadoop/yarn-site.xml

<configuration>

  <property>

    <name>yarn.nodemanager.aux-services</name>

    <value>mapreduce_shuffle</value>

  </property>

</configuration>

(9) Configure MapReduce

[root@hadoop1 /usr/local/hadoop-2.6.0-cdh5.16.2]# cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml

[root@hadoop1 /usr/local/hadoop-2.6.0-cdh5.16.2]# vim etc/hadoop/mapred-site.xml

<configuration>

  <property>

    <name>mapreduce.framework.name</name>

    <value>yarn</value>

  </property>

</configuration>

(10) Create Hadoop's temp directory

[root@hadoop1 /usr/local/hadoop-2.6.0-cdh5.16.2]# mkdir -p /data/hadoop/tmp

 

(11) Format the NameNode to apply the HDFS configuration (only do this once on a fresh install; re-formatting wipes the existing HDFS metadata):

[root@hadoop1 /usr/local/hadoop-2.6.0-cdh5.16.2]# ./bin/hdfs namenode -format

(12) Start all components:

[root@hadoop1 /usr/local/hadoop-2.6.0-cdh5.16.2]# ./sbin/start-all.sh

After a successful start, check the processes:

[root@hadoop1 ~]# jps

3344 SecondaryNameNode

2722 NameNode

3812 Jps

3176 DataNode

3578 NodeManager

3502 ResourceManager

Then open the HDFS web UI in a browser; the default port is 50070:

http://192.168.56.77:50070

 

Next, open the YARN web UI; the default port is 8088:

http://192.168.56.77:8088

(13) Test that HDFS reads and writes normally

[root@hadoop1 ~]# hadoop fs -put anaconda-ks.cfg /

[root@hadoop1 ~]# hadoop fs -ls /

-rw-r--r--   1 root supergroup       1328 2020-11-17 21:34 /anaconda-ks.cfg
