Ambari(Install)

Ambari Server(backend)

    Server code: Java 1.7 / 1.8
    Agent scripts: Python
    Database: Postgres, Oracle, MySQL
    ORM: EclipseLink
    Security: Spring Security with remote LDAP integration and local database
    REST server: Jersey (JAX-RS)
    Dependency Injection: Guice
    Unit Testing: JUnit
    Mocks: EasyMock
    Configuration management: Python


Ambari Web(frontend)

    Frontend code: JavaScript
    Client-side MVC framework: Ember.js / AngularJS
    Templating: Handlebars.js (integrated with Ember.js)
    DOM manipulation: jQuery
    Look and feel: Bootstrap 2
    CSS preprocessor: LESS
    Unit Testing: Mocha
    Mocks: Sinon.js
    Application assembler/tester: Brunch / Grunt / Gulp

 

machines(each: 28 Core, 64G RAM, 200G Disk)

  list by ip: 192.168.11.72
              192.168.11.73
              192.168.11.74
              192.168.11.75
              192.168.11.76
  or list by name:
              bigdataserver0
              bigdataserver1
              bigdataserver2
              bigdataserver3
              bigdataserver4

 

environment

  os version: ubuntu 14.04

  python version:2.7.6
  java version:1.8
  scala version:2.11

  ambari version:2.6.0
  hadoop version:2.6.5
  spark version:2.2.0

ansible

  (master)sudo apt-get install sshpass
  (master)sudo apt-get install ansible
  (master)sudo vi /etc/ansible/hosts
    [hadoop]
    192.168.11.[72:76]
    [spark]
    192.168.11.[72:76]
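The `[72:76]` range pattern in the inventory expands to the five consecutive node IPs. A minimal shell sketch of the same expansion (the IP prefix is the one used throughout this document):

```shell
# Expand the inventory range 192.168.11.[72:76] the way Ansible does.
hosts=$(for i in $(seq 72 76); do echo "192.168.11.$i"; done)
echo "$hosts"
```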

user/sudoer 

  (all)sudo useradd bigdata -d /home/bigdata -s /bin/bash -m
  (all)sudo passwd bigdata
  (all)sudo usermod -G root bigdata
  (all)add bigdata to /etc/sudoers:
      bigdata ALL=(ALL) ALL

hosts

  (all)sudo nano /etc/hosts
    192.168.11.72 bigdataserver0
    192.168.11.73 bigdataserver1
    192.168.11.74 bigdataserver2
    192.168.11.75 bigdataserver3
    192.168.11.76 bigdataserver4
  (master)ansible hadoop -m copy -a "src=/etc/hosts dest=/etc/hosts" -u bigdata -k --sudo -K
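The five name/IP pairs above can be generated rather than typed by hand. A sketch that builds the /etc/hosts fragment from the last-octet range (written to a temp file here, not directly to /etc/hosts):

```shell
# Build the /etc/hosts entries for bigdataserver0..4 from the IP range.
hosts_frag=$(mktemp)
n=0
for i in $(seq 72 76); do
  echo "192.168.11.$i bigdataserver$n" >> "$hosts_frag"
  n=$((n + 1))
done
cat "$hosts_frag"
```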

hostnames

  (all)sudo nano /etc/hostname

ssh

  (master)ansible hadoop -m shell -a "sudo apt-get install -y ssh" -u bigdata -k --sudo -K
  (master)ansible hadoop -m shell -a "sudo apt-get install -y rsync" -u bigdata -k --sudo -K
  (master)su bigdata
  (master)cd ~
  (master)ssh-keygen
  (master)cat .ssh/id_rsa.pub >> .ssh/authorized_keys
  (master)ansible hadoop -m shell -a "mkdir -p ~/.ssh" -u bigdata -k -vv
  (master)scp  ~/.ssh/authorized_keys bigdata@192.168.11.72:~/.ssh/
  (master)scp  ~/.ssh/authorized_keys bigdata@192.168.11.73:~/.ssh/
  (master)scp  ~/.ssh/authorized_keys bigdata@192.168.11.74:~/.ssh/
  (master)scp  ~/.ssh/authorized_keys bigdata@192.168.11.75:~/.ssh/
  (master)scp  ~/.ssh/authorized_keys bigdata@192.168.11.76:~/.ssh/
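The five scp lines can be collapsed into one loop. As a sketch, the loop below only prints the commands (a dry run; drop the echo-into-variable indirection and run `scp` directly to actually copy):

```shell
# Push the master's authorized_keys to every node (dry run: commands are printed, not run).
cmds=$(for i in $(seq 72 76); do
  echo "scp ~/.ssh/authorized_keys bigdata@192.168.11.$i:~/.ssh/"
done)
echo "$cmds"
```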

selinux/apparmor

  (master)ansible hadoop -m shell -a "sudo apt-get install -y policycoreutils" -u bigdata -k --sudo -S -K
  (master)ansible hadoop -m shell -a "sestatus" -u bigdata -k --sudo -K
  (master)ansible hadoop -m shell -a "apparmor_status" -u bigdata -k --sudo -K

iptables

 (master)ansible hadoop -m shell -a "sudo ufw disable" -u bigdata -k --sudo -K    # Ubuntu 14.04 has no "iptables" init service; ufw is its firewall frontend

mm(transparent hugepage)

  (all)echo never|sudo tee /sys/kernel/mm/transparent_hugepage/defrag
  (all)echo never|sudo tee /sys/kernel/mm/transparent_hugepage/enabled
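These sysfs writes do not survive a reboot. One common way to persist them (an assumption about your init setup, not from the source) is to re-apply the same writes at boot, e.g. from /etc/rc.local before its `exit 0` line:

```shell
# /etc/rc.local fragment (Ubuntu 14.04): re-apply THP settings at boot.
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
```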

zone

  (all)sudo cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

ntp

  (master)ansible hadoop -m shell -a "sudo apt-get install -y ntp" -u bigdata -k --sudo -K
  (master)ansible hadoop -m shell -a "date -R" -u bigdata -k
  (all)crontab -e                             # optional: skip if clock sync is not a problem
    0-59/10 * * * * /usr/sbin/ntpdate bigdataserver0 | logger -t NTP

database(Ambari installs an embedded PostgreSQL by default; set up a separate DBMS only if you do not want the default)

 (master) install one DBMS server: PostgreSQL/Oracle/MySQL/MariaDB/MS SQL Server/other
  (master) ambari-server setup --jdbc-db=postgres --jdbc-driver=/usr/lib/ambari-server/postgresql-9.3-1101-jdbc4.jar
  (master)
        su - postgres  or sudo su - postgres
        psql -U postgres
            postgres=#alter user postgres with password 'bigdata';

            postgres=#create user hive with password 'introcks1234';
            postgres=#create database hive;
            postgres=#grant all privileges on database hive to hive;

            postgres=#create user oozie with password 'introcks1234';
            postgres=#create database oozie;
            postgres=#grant all privileges on database oozie to oozie;

            postgres=#create user superset with password 'introcks1234';
            postgres=#create database superset;
            postgres=#grant all privileges on database superset to superset;

            postgres=#\q
        sudo service postgresql restart
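The three user/database/grant triples above follow one pattern. A sketch that generates the SQL for any list of service names (only printed here; pipe the output into `psql -U postgres` to apply it; the password is the one used above):

```shell
# Generate the per-service PostgreSQL provisioning SQL (hive, oozie, superset).
sql=""
for svc in hive oozie superset; do
  sql="$sql
create user $svc with password 'introcks1234';
create database $svc;
grant all privileges on database $svc to $svc;"
done
echo "$sql"
```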

yum/apt(local http repository if download&install):

  (master)sudo apt-get install nginx
  (master)cd /usr/share/nginx/html/;sudo mkdir ambari; cd /usr/share/nginx/html/ambari
  (master)sudo apt-get install yum yum-utils createrepo
  (master)wget http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.6.0.0/ambari-2.6.0.0-ubuntu14.tar.gz
  (master)tar -zxvf ambari-2.6.0.0-ubuntu14.tar.gz
  (master)wget http://public-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.6.3.0/HDP-2.6.3.0-ubuntu14-deb.tar.gz
  (master)tar -zxvf HDP-2.6.3.0-ubuntu14-deb.tar.gz
  (master)cd /usr/share/nginx/html/ambari;sudo mkdir HDP-UTILS;cd /usr/share/nginx/html/ambari/HDP-UTILS
  (master)wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ubuntu14/HDP-UTILS-1.1.0.21-ubuntu14.tar.gz
  (master)tar -zxvf HDP-UTILS-1.1.0.21-ubuntu14.tar.gz
  (master)sudo mkdir /usr/share/nginx/html/ARTIFACTS;cd /usr/share/nginx/html/ARTIFACTS
  (master)wget http://public-repo-1.hortonworks.com/ARTIFACTS/jdk-8u112-linux-x64.tar.gz
  (master)cd ~

ambari & hadoop requirements
 

  install ssh (all:sudo apt-get install ssh;sudo apt-get install rsync)
  passwordless ssh login (all)
  Java>=6
  Python>=2.6
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
  (echo never|sudo tee /sys/kernel/mm/transparent_hugepage/defrag)
  (echo never|sudo tee /sys/kernel/mm/transparent_hugepage/enabled)
  iptables down(master:sudo service iptables stop)
  ntp start (master:sudo apt-get install ntp;sudo service ntp start)
  git(if build&install)
  maven(if build&install)
  Node.js(if build&install)
  rpm(if build&install)
  yum/apt(if build&install)
  brunch(if build&install)

ambari(Hadoop provision/manage/monitor/alert) build&install
  deps(all):

  git clone https://git-wip-us.apache.org/repos/asf/ambari.git
  cd ambari
  git pull
  cd ambari-web
  sudo npm install -g brunch
  nano jms-1.1.pom
  nano jmxtools-1.2.1.pom
  nano jmxri-1.2.1.pom
  mvn install:install-file -Dfile=jms-1.1.pom -DgroupId=javax.jms -DartifactId=jms -Dversion=1.1 -Dpackaging=jar
  mvn install:install-file -Dfile=jmxtools-1.2.1.pom -DgroupId=com.sun.jdmk -DartifactId=jmxtools -Dversion=1.2.1 -Dpackaging=jar
  mvn install:install-file -Dfile=jmxri-1.2.1.pom -DgroupId=com.sun.jmx -DartifactId=jmxri -Dversion=1.2.1 -Dpackaging=jar

  mvn -B clean install package jdeb:jdeb -DnewVersion=2.6.0.0.0 -DskipTests -Dpython.ver="python >= 2.6"

  ambari-agent(node):

  sudo dpkg -i ambari-agent*.deb
  sudo nano /etc/ambari-agent/conf/ambari-agent.ini
    hostname=bigdataserver0
    run_as_user=bigdata
  sudo service ambari-agent start

  ambari-server(master):

  sudo apt-get install -y libmysql-java
  sudo dpkg -i ambari-server*.deb
  sudo ambari-server setup
  sudo nano /etc/ambari-server/conf/ambari.properties
    ambari-server.user=bigdata
  sudo service ambari-server start


ambari network&install
  deps(all):

  cd /etc/apt/sources.list.d/
  sudo wget http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.6.0.0/ambari.list
  (
  #VERSION_NUMBER=2.6.0.0-267
  deb http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.6.0.0 Ambari main
  )
  sudo wget http://public-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.6.3.0/hdp.list
  (
  #VERSION_NUMBER=2.6.3.0-235
  deb http://public-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.6.3.0 HDP main
  deb http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ubuntu14 HDP-UTILS main
  )

  wget http://public-repo-1.hortonworks.com/ARTIFACTS/jdk-8u112-linux-x64.tar.gz
  sudo mv jdk-8u112-linux-x64.tar.gz /var/lib/ambari-server/resources/jdk-8u112-linux-x64.tar.gz

  sudo apt-get clean
  (ansible hadoop -m shell -a "sudo apt-get clean" -u bigdata -k --sudo -K)
  sudo apt-get update
  (ansible hadoop -m shell -a "sudo apt-get update" -u bigdata -k --sudo -K)
  sudo apt-get install -y libmysql-java
  (ansible hadoop -m shell -a "apt-get install -y libmysql-java" -u bigdata -k --sudo -K)

  ambari-agent(node):

  wget http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.6.0.0/pool/main/a/ambari-agent/ambari-agent_2.6.0.0-267_amd64.deb
  sudo mv ambari-agent_2.6.0.0-267_amd64.deb /var/cache/apt/archives/ambari-agent_2.6.0.0-267_amd64.deb
  (ansible hadoop -m copy -a "src=/var/cache/apt/archives/ambari-agent_2.6.0.0-267_amd64.deb dest=/var/cache/apt/archives/" -u bigdata -k --sudo -K)
  sudo apt-get update
  (ansible hadoop -m shell -a "sudo apt-get update" -u bigdata -k --sudo -K)
  sudo apt-get install ambari-agent
  (ansible hadoop -m shell -a "sudo apt-get install -y --force-yes ambari-agent" -u bigdata -k --sudo -K)
  sudo nano /etc/ambari-agent/conf/ambari-agent.ini
    hostname=bigdataserver0
    run_as_user=bigdata
  sudo service ambari-agent start

  ambari-server(master):

  wget http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.6.0.0/pool/main/a/ambari-server/ambari-server_2.6.0.0-267_amd64.deb
  sudo mv ambari-server_2.6.0.0-267_amd64.deb /var/cache/apt/archives/ambari-server_2.6.0.0-267_amd64.deb
  sudo apt-get install ambari-server
  sudo ambari-server setup
  sudo nano /etc/ambari-server/conf/ambari.properties
    ambari-server.user=bigdata
  sudo service ambari-server start


ambari download&install [used in the PoC phase]
  deps(master):

  cd /etc/apt/sources.list.d/
  sudo nano ambari.list
  (
  #VERSION_NUMBER=2.6.0.0-267
  deb http://192.168.11.72/ambari/ambari/ubuntu14/2.6.0.0-267 Ambari main
  )
  sudo nano hdp.list
  (
  #VERSION_NUMBER=2.6.3.0-235
  deb http://192.168.11.72/ambari/HDP/ubuntu14/2.6.3.0-235 HDP main
  deb http://192.168.11.72/ambari/HDP-UTILS HDP-UTILS main
  )
  sudo apt-get clean
  sudo apt-get update

  wget http://192.168.11.72/ARTIFACTS/jdk-8u112-linux-x64.tar.gz
  sudo mv jdk-8u112-linux-x64.tar.gz /var/lib/ambari-server/resources/jdk-8u112-linux-x64.tar.gz

  ambari-agent(node):

  wget http://192.168.11.72/ambari/ambari/ubuntu14/2.6.0.0-267/pool/main/a/ambari-agent/ambari-agent_2.6.0.0-267_amd64.deb
  sudo mv ambari-agent_2.6.0.0-267_amd64.deb /var/cache/apt/archives/ambari-agent_2.6.0.0-267_amd64.deb
  sudo apt-get install ambari-agent
  sudo nano /etc/ambari-agent/conf/ambari-agent.ini
    hostname=bigdataserver0
    run_as_user=bigdata
  sudo service ambari-agent start

  ambari-server(master):

  wget http://192.168.11.72/ambari/ambari/ubuntu14/2.6.0.0-267/pool/main/a/ambari-server/ambari-server_2.6.0.0-267_amd64.deb
  sudo mv ambari-server_2.6.0.0-267_amd64.deb /var/cache/apt/archives/ambari-server_2.6.0.0-267_amd64.deb
  sudo apt-get install ambari-server
  sudo ambari-server setup
  sudo nano /etc/ambari-server/conf/ambari.properties
    ambari-server.user=bigdata
  sudo service ambari-server start

ambari admin

  http://192.168.11.72:8080
  ambari credentials:   admin/admin
  postgres(default)
    Database admin user (postgres): postgres
    Database name (ambari): ambari
    Postgres schema (ambari): ambari
    Username (ambari): ambari
    Database Password (bigdata): bigdata

  mysql credentials: ambari/bigdata
    Database admin user (root): root
    Database name (ambari): ambari
    Username (ambari): ambari
    Database Password (bigdata): bigdata


ambari config/monitor

    /etc/ambari-server/conf/ambari.properties
    /var/log/ambari-server/*.log

    /etc/ambari-agent/conf/ambari-agent.ini
    /var/log/ambari-agent/*.log

ambari private package repository setup

     local base_url: http://192.168.11.72/ambari/ambari/ubuntu14/2.6.0.0-267/
     local base_url: http://192.168.11.72/ambari/HDP/ubuntu14/2.6.3.0-235
     local base_url: http://192.168.11.72/ambari/HDP-UTILS
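As a sketch, the two apt source files used in the download&install section can be generated from these base_urls (written to a temp directory here rather than /etc/apt/sources.list.d/; the repo host is this document's master, 192.168.11.72):

```shell
# Write ambari.list and hdp.list from the local repository base URLs.
repo=http://192.168.11.72/ambari
dir=$(mktemp -d)
echo "deb $repo/ambari/ubuntu14/2.6.0.0-267 Ambari main" > "$dir/ambari.list"
{
  echo "deb $repo/HDP/ubuntu14/2.6.3.0-235 HDP main"
  echo "deb $repo/HDP-UTILS HDP-UTILS main"
} > "$dir/hdp.list"
cat "$dir/ambari.list" "$dir/hdp.list"
```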

ambari errors

    1.Error: 500 status code received on GET method for API: /api/v1/stacks/HDP/versions/2.4/recommendations
             Error message: Error occured during stack advisor command invocation: Cannot create /var/run/ambari-server/stack-recommendations
      Fix: sudo chown -R bigdata /var/run/ambari-server

    2.Error: Could not get lock /var/lib/dpkg/lock - open
      Fix: sudo rm /var/lib/dpkg/lock

    3.Error: Could not get lock /var/cache/apt/archives/lock - open
      Fix: sudo rm /var/cache/apt/archives/lock

    4.Error: Host Checks found 47 issues on 3 hosts.
      Fix: sudo python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent --skip=users

    5.Error (superset): from _pg import *
                        ImportError: libpq.so.5: cannot open shared object file: No such file or directory
      Fix: sudo apt-get install postgresql-client

    6.Error (hive): File does not exist: /user/admin
      Fix: hadoop fs -mkdir /user/admin
           hadoop fs -chown admin:hadoop /user/admin
           or
           hdfs dfs -mkdir /user/admin
           hdfs dfs -chown admin:hadoop /user/admin

    7.Error (ambari): ERROR: Unexpected file/directory found in /usr/hdp: derby.log
      Fix: sudo rm /usr/hdp/derby.log

    8.Cluster install fails partway through
      sudo service ambari-server stop
      ambari-server reset
      sudo service ambari-server start

    9.Error: Invalid smartsense ID unspecified. Please configure a vaid SmartSense ID to proceed
      Fix: SmartSense is a paid Hortonworks add-on that requires a subscription; it can simply be disabled/uninstalled/removed

    10.Last resort: wipe Ambari and reinstall
      on the agents:
        ambari-agent stop
        sudo apt-get purge ambari-agent
        rm -rf /var/lib/ambari-agent
        rm -rf /var/run/ambari-agent
        rm -rf /usr/lib/ambari-agent
        rm -rf /etc/ambari-agent
        rm -rf /var/log/ambari-agent
        rm -rf /usr/lib/python2.6/site-packages/ambari*

      on the server:
        ambari-server stop
        ambari-server reset
        sudo apt-get purge ambari-server
        rm -rf /var/lib/ambari-server
        rm -rf /var/run/ambari-server
        rm -rf /usr/lib/ambari-server
        rm -rf /etc/ambari-server
        rm -rf /var/log/ambari-server
        rm -rf /usr/lib/python2.6/site-packages/ambari*
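The rm -rf lines for both sides follow one pattern. A sketch with the side and the filesystem root made parameters, so the cleanup can be rehearsed against a scratch directory before being pointed at the real filesystem (pass an empty root only when you mean it):

```shell
# Remove all leftover Ambari state for one side (agent or server).
purge_ambari() {
  side=$1   # ambari-agent or ambari-server
  root=$2   # "" for the real filesystem, a scratch dir for a dry rehearsal
  for d in var/lib var/run usr/lib etc var/log; do
    rm -rf "$root/$d/$side"
  done
  rm -rf "$root"/usr/lib/python2.6/site-packages/ambari*
}
```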


    11.Error (ambari-agent restart): You can't perform this operation as non-sudoer user. Please, re-login or configure sudo access for this user
        sudo usermod -aG sudo bigdata    # -a appends; plain -G would replace the user's existing supplementary groups

 

ambari web ui tool(web)

  http://192.168.11.72:8080

ambari cli tool(needs a separate download):

  ambari_shell --host localhost --port 8080 --user admin --password admin
  >show clusters

  java -jar ambari-shell/target/ambari-shell-1.3.0-SNAPSHOT.jar --ambari.server=localhost --ambari.port=8080 --ambari.user=admin --ambari.password=admin
  >host list

hdfs mount by nfs

   sudo apt-get install nfs-common
   showmount -e 192.168.11.72
   showmount -a 192.168.11.72
   sudo mkdir -p /mnt/hd/hdfs
   sudo mount -t nfs -o vers=3,proto=tcp,nolock 192.168.11.72:/ /mnt/hd/hdfs

mapreduce submit

  java -jar target/BigData-MR-WordCount-0.0.1-SNAPSHOT-jar-with-dependencies.jar hdfs://192.168.11.73:8020/user/zhaomeng/examples.desktop hdfs://192.168.11.73:8020/user/zhaomeng/examples.desktop.out

spark1 shell

  sudo su hdfs
  spark-shell --master yarn --deploy-mode client

spark2 shell

  sudo su hdfs
  SPARK_MAJOR_VERSION=2 spark-shell --master yarn --deploy-mode client

spark2 submit(should run on the master)

  SPARK_MAJOR_VERSION=2 HADOOP_USER_NAME=zhaomeng HADOOP_CONF_DIR=$SPARK_HOME/conf spark-submit --class "intellif.bigdata.sp.WordCount" --master yarn --deploy-mode client target/BigData-Spark-WordCount-0.0.1-SNAPSHOT-jar-with-dependencies.jar hdfs://192.168.11.73:8020/user/zhaomeng/examples.desktop hdfs://192.168.11.73:8020/tmp/examples.desktop.out


Ambari service/component version

hdp-select

 

 

Reposted from: https://my.oschina.net/igooglezm/blog/1578005
