Flink Learning Journey (Part 2)

In this setup, Flink's resource management is handled by YARN from the Hadoop ecosystem, so we first need to install Hadoop and start YARN and HDFS. The tutorials I found all assume a multi-node cluster, but I only have one machine, so everything below is installed on that single machine.

1. Download the Hadoop package

https://dlcdn.apache.org/hadoop/common/hadoop-3.3.5/hadoop-3.3.5.tar.gz

2. Extract it to a directory of your choice; here I use /root/dxy/hadoop/hadoop-3.3.5.
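
A minimal sketch of the download and extraction steps, assuming the /root/dxy/hadoop directory used throughout this post:

mkdir -p /root/dxy/hadoop
cd /root/dxy/hadoop
wget https://dlcdn.apache.org/hadoop/common/hadoop-3.3.5/hadoop-3.3.5.tar.gz
tar -zxvf hadoop-3.3.5.tar.gz    # unpacks to /root/dxy/hadoop/hadoop-3.3.5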

3. Edit the Hadoop configuration files

cd /root/dxy/hadoop/hadoop-3.3.5/etc/hadoop

vi core-site.xml   (the core configuration file)

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
        <!-- NameNode address -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://10.26.141.203:8020</value>
        </property>
        <!-- Hadoop data storage directory -->
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/root/dxy/hadoop/hadoop-3.3.5/datas</value>
        </property>
        <property>
                <name>hadoop.http.staticuser.user</name>
                <value>root</value>
        </property>
</configuration>

vi hdfs-site.xml   (the HDFS configuration file)


<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
        <!-- NameNode web UI (HTTP) address -->
        <property>
                <name>dfs.namenode.http-address</name>
                <value>10.26.141.203:9870</value>
        </property>
        <!-- Secondary NameNode web UI (HTTP) address -->
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>10.26.141.203:9868</value>
        </property>
</configuration>
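
On a single machine there is only one DataNode, so HDFS's default replication factor of 3 cannot be met. An optional property that is commonly added to hdfs-site.xml in this situation:

        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>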

vi yarn-site.xml   (the YARN configuration file)

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->
        <!-- Have MapReduce use the shuffle auxiliary service -->
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <!-- ResourceManager address -->
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>10.26.141.203</value>
        </property>
        <!-- Environment variables inherited by containers -->
        <property>
                <name>yarn.nodemanager.env-whitelist</name>
                <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
        </property>
</configuration>
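
When Flink jobs are later submitted to a small single-node YARN like this one, containers can be killed for exceeding the virtual memory limit. One commonly used, optional workaround (not part of the minimal setup above) is to disable the virtual memory check in yarn-site.xml:

        <property>
                <name>yarn.nodemanager.vmem-check-enabled</name>
                <value>false</value>
        </property>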

vim mapred-site.xml   (the MapReduce configuration file)

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <!-- Run MapReduce programs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

vi workers   (enter the server IP, e.g.:)

10.26.141.203

Edit the environment variables (for example, append the following to /etc/profile, then reload it):

export HADOOP_HOME=/root/dxy/hadoop/hadoop-3.3.5
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_CLASSPATH=`hadoop classpath`

source /etc/profile

After that, run hadoop version to check that the configuration works; it should print the installed version (Hadoop 3.3.5 here).

HDFS must be formatted before it is started for the first time:

hdfs namenode -format

Start HDFS and YARN:

sbin/start-dfs.sh
sbin/start-yarn.sh
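
To sanity-check that everything came up, jps should list the HDFS and YARN daemons on this node (PIDs will differ):

jps
# expected processes on this single node:
# NameNode
# DataNode
# SecondaryNameNode
# ResourceManager
# NodeManager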

Open the web UIs to confirm that the services are running: the HDFS NameNode UI at http://10.26.141.203:9870 (as configured above) and the YARN ResourceManager UI at http://10.26.141.203:8088 (the default port).

Submit a Flink job to YARN in application mode as follows:

bin/flink run-application -t yarn-application -c com.dxy.learn.demo.SocketStreamWordCount FlinkLearn-1.0-SNAPSHOT.jar 
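
For reference, a minimal sketch of what the com.dxy.learn.demo.SocketStreamWordCount entry class could look like, assuming a plain DataStream word count reading from a socket (the host and port 7777 below are illustrative assumptions, not taken from the original project):

package com.dxy.learn.demo;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class SocketStreamWordCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.socketTextStream("10.26.141.203", 7777)            // hypothetical host/port
           .flatMap((String line, Collector<Tuple2<String, Long>> out) -> {
               for (String word : line.split(" ")) {            // split each line into words
                   out.collect(Tuple2.of(word, 1L));
               }
           })
           .returns(Types.TUPLE(Types.STRING, Types.LONG))      // lambdas need an explicit result type
           .keyBy(t -> t.f0)                                    // group by word
           .sum(1)                                              // running count per word
           .print();

        env.execute("SocketStreamWordCount");
    }
}

With -t yarn-application the job runs in YARN application mode, where the job's main() method executes on the JobManager inside the YARN cluster rather than on the client, and -c names the entry class inside the jar. Before submitting, something needs to be listening on the assumed socket (for example nc -lk 7777 on that host), and the running job should then show up in the YARN ResourceManager UI.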
