Building Hadoop with ansible-playbook

Prerequisite: for environment setup, see the Ansible environment preparation article.

        As a beginner with ansible-playbook, I have not written many of these modules to a high standard; they will be refined over time.

        The main goal of this build is to practice writing Ansible yml by installing Hadoop. If you are not a beginner, skip step 2 and go straight to the complete yml at the end.

1. Outline of the build process, pieced together from online Hadoop setup tutorials (steps 3 and 4 must run on every node; all other steps run on the master node only)

        1. Download and extract the installation package

        2. Configure the Hadoop files

        3. Configure .bash_profile and apply it

        4. Install and configure JDK 1.8

        5. Format the Hadoop filesystem
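The playbooks below address hosts by the names master, node1 and node2, so the Ansible inventory must define them. A minimal sketch (the file location and group names are assumptions; adjust to your environment):

```ini
; /etc/ansible/hosts (assumed default location)
[master]
master

[workers]
node1
node2
```

With this layout, `hosts: master` targets the NameNode and `hosts: all` targets every node, which is exactly how the plays below are scoped.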

2. Writing a yml module for each of the steps above

        1. Download the package with get_url (master node)

        {{variable_name}}: refers to a value defined under vars, which makes version pinning convenient

        url: the download source URL

        dest: the directory where the download is saved

get_url: url=http://archive.apache.org/dist/hadoop/core/hadoop-{{hadoop_version}}/hadoop-{{hadoop_version}}.tar.gz dest=/tmp
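The k=v string form above works, but the same task can also be written in YAML dictionary form, which is generally easier to read (this is just a restatement of the task above, not a change in behavior):

```yaml
- name: "Download hadoop"
  get_url:
    url: "http://archive.apache.org/dist/hadoop/core/hadoop-{{hadoop_version}}/hadoop-{{hadoop_version}}.tar.gz"
    dest: /tmp
```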

        2. Extract the archive with unarchive (master node)

        src: location of the archive

        dest: directory to extract into

        copy: whether to first copy the archive from the control machine to the remote host; defaults to yes. Set it to no when the archive is already on the remote host, as it is here after get_url (newer Ansible versions express this with remote_src: yes instead).

unarchive:
        src: /tmp/hadoop-{{hadoop_version}}.tar.gz
        dest: "{{hadoop_install_dir}}"
        copy: no

        3. Rename the extracted directory with shell (master node)

        &&: chains commands; each subsequent command runs only if the previous one succeeded

shell: cd {{hadoop_install_dir}} &&
             mv hadoop-{{hadoop_version}} hadoop
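As written, the mv fails on a second playbook run because the source directory no longer exists. A sketch of a more repeatable variant using the shell module's creates argument, which skips the task once the target directory exists:

```yaml
- name: "Rename hadoop directory"
  shell: mv {{hadoop_install_dir}}/hadoop-{{hadoop_version}} {{hadoop_install_dir}}/hadoop
  args:
    creates: "{{hadoop_install_dir}}/hadoop"  # skip if already renamed
```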

        4. Configure the Hadoop parameter files. Depending on preference you can use lineinfile or write them directly with shell. Since the settings all go inside the <configuration></configuration> tags, the examples below use lineinfile to append to core-site.xml and to replace a line in slaves (master node).

        path: the file to edit

        line: the content to write

        insertafter: the line after which to insert

        regexp: regular expression matching the line to replace

lineinfile:
        path: /usr/local/hadoop/etc/hadoop/core-site.xml
        line: <property><name>fs.defaultFS</name><value>hdfs://master:9000</value></property><property><name>hadoop.tmp.dir</name><value>/opt/hadoop/hadoopdata</value></property>
        insertafter: '<configuration>'
lineinfile:
        path: /usr/local/hadoop/etc/hadoop/slaves
        line: node1
        regexp: '^localhost'

        5. Distribute the configured Hadoop directory to the worker nodes with scp (master node; this assumes the master can SSH to the workers as root without a password)

shell: scp -r /usr/local/hadoop root@node1:/usr/local

        6. Append multiple lines to .bash_profile with shell plus with_items (all nodes)

        with_items: each list item triggers one write; inside shell, {{item}} is the placeholder for the current item

shell: /bin/echo {{item}} >> ~/.bash_profile
      with_items:
        - export HADOOP_HOME=/usr/local/hadoop
        - export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
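Note that echo >> appends the lines again on every run. An alternative sketch swapping in lineinfile with the same loop, which only adds a line when it is missing:

```yaml
- name: "Configure .bash_profile"
  lineinfile:
    path: ~/.bash_profile
    line: "{{item}}"
    create: yes        # create the file if it does not exist yet
  with_items:
    - export HADOOP_HOME=/usr/local/hadoop
    - export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
```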

        7. Install and configure JDK 1.8. Downloading the tarball from Oracle's site requires a login, so get_url is not used here; instead, place the installer in /opt/java/ ahead of time (all nodes).

    - name: "Extract JDK 1.8"
      unarchive:
        src: /opt/java/jdk-8u291-linux-x64.tar.gz
        dest: /usr/local
        copy: no

    - name: "Set JAVA_HOME in hadoop-env.sh"
      lineinfile:
        path: /usr/local/hadoop/etc/hadoop/hadoop-env.sh
        line: export JAVA_HOME=/usr/local/jdk1.8.0_291
        regexp: '^export JAVA_HOME=\S*'

    - name: "Create jps symlink"
      shell: cd /usr/local/bin/ &&
             ln -s /usr/local/jdk1.8.0_291/bin/jps jps

        8. Create the data directory and format the Hadoop filesystem (master node)

    - name: "Create the data directory"
      shell: mkdir -p /opt/hadoop/hadoopdata
    - name: "Format the filesystem"
      shell: /usr/local/hadoop/bin/hadoop namenode -format
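Formatting wipes existing HDFS metadata, so it must only ever run once per cluster. A sketch that guards the format with a marker file (the marker path and the -nonInteractive flag usage are assumptions to suit this layout):

```yaml
- name: "Format the HDFS filesystem (first run only)"
  shell: /usr/local/hadoop/bin/hadoop namenode -format -nonInteractive && touch /opt/hadoop/.formatted
  args:
    creates: /opt/hadoop/.formatted  # once the marker exists, never format again
```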

3. The complete yml file

---
- hosts: master
  gather_facts: no
  vars:
    hadoop_version: 2.7.5
    hadoop_install_dir: /usr/local
    node1: node1
    node2: node2

  tasks:
    - name: "Download hadoop"
      get_url: url=http://archive.apache.org/dist/hadoop/core/hadoop-{{hadoop_version}}/hadoop-{{hadoop_version}}.tar.gz dest=/tmp

    - name: "Extract hadoop"
      unarchive:
        src: /tmp/hadoop-{{hadoop_version}}.tar.gz
        dest: "{{hadoop_install_dir}}"
        copy: no

    - name: "Rename hadoop directory"
      shell: cd {{hadoop_install_dir}} &&
             mv hadoop-{{hadoop_version}} hadoop

    - name: "Configure core-site.xml"
      lineinfile:
        path: /usr/local/hadoop/etc/hadoop/core-site.xml
        line: <property><name>fs.defaultFS</name><value>hdfs://master:9000</value></property><property><name>hadoop.tmp.dir</name><value>/opt/hadoop/hadoopdata</value></property>
        insertafter: '<configuration>'

    - name: "Configure hdfs-site.xml"
      lineinfile:
        path: /usr/local/hadoop/etc/hadoop/hdfs-site.xml
        line: <property><name>dfs.replication</name><value>1</value></property>
        insertafter: '<configuration>'

    - name: "Configure yarn-site.xml"
      lineinfile:
        path: /usr/local/hadoop/etc/hadoop/yarn-site.xml
        line: <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property><property><name>yarn.resourcemanager.address</name><value>master:18040</value></property><property><name>yarn.resourcemanager.scheduler.address</name><value>master:18030</value></property><property><name>yarn.resourcemanager.resource-tracker.address</name><value>master:18025</value></property><property><name>yarn.resourcemanager.admin.address</name><value>master:18141</value></property><property><name>yarn.resourcemanager.webapp.address</name><value>master:18088</value></property>
        insertafter: '<configuration>'

    - name: "Copy the mapred-site.xml template"
      shell: cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

    - name: "Configure mapred-site.xml"
      lineinfile:
        path: /usr/local/hadoop/etc/hadoop/mapred-site.xml
        line: <property><name>mapreduce.framework.name</name><value>yarn</value></property>
        insertafter: '<configuration>'

    - name: "Configure slaves (node1)"
      lineinfile:
        path: /usr/local/hadoop/etc/hadoop/slaves
        line: node1
        regexp: '^localhost'

    - name: "Configure slaves (node2)"
      lineinfile:
        path: /usr/local/hadoop/etc/hadoop/slaves
        line: node2
        insertafter: 'node1'

    - name: "Distribute to worker node 1"
      shell: scp -r /usr/local/hadoop root@{{node1}}:/usr/local

    - name: "Distribute to worker node 2"
      shell: scp -r /usr/local/hadoop root@{{node2}}:/usr/local

- hosts: all
  gather_facts: no

  tasks:
    - name: "Configure .bash_profile"
      shell: /bin/echo {{item}} >> ~/.bash_profile
      with_items:
        - export HADOOP_HOME=/usr/local/hadoop
        - export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

    - name: "Apply .bash_profile"
      # note: sourcing here only affects this task's own shell; later tasks
      # and new login sessions read the updated file on their own
      shell: source ~/.bash_profile


    - name: "Extract JDK 1.8"
      unarchive:
        src: /opt/java/jdk-8u291-linux-x64.tar.gz
        dest: /usr/local
        copy: no

    - name: "Set JAVA_HOME in hadoop-env.sh"
      lineinfile:
        path: /usr/local/hadoop/etc/hadoop/hadoop-env.sh
        line: export JAVA_HOME=/usr/local/jdk1.8.0_291
        regexp: '^export JAVA_HOME=\S*'

    - name: "Create jps symlink"
      shell: cd /usr/local/bin/ &&
             ln -s /usr/local/jdk1.8.0_291/bin/jps jps

- hosts: master
  gather_facts: no

  tasks:
    - name: "Create the data directory"
      shell: mkdir -p /opt/hadoop/hadoopdata
    - name: "Format the filesystem"
      shell: /usr/local/hadoop/bin/hadoop namenode -format

4. Start the cluster with start-all.sh and verify with jps
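This last step can be sketched on the master node as follows (paths assume the layout above; the exact daemon list depends on your configuration):

```shell
/usr/local/hadoop/sbin/start-all.sh   # starts the HDFS and YARN daemons
jps                                   # master should show NameNode, SecondaryNameNode, ResourceManager
ssh node1 jps                         # workers should show DataNode, NodeManager
```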
