An Extra-Long, Extra-Detailed Guide: Building a Local CentOS 7.x + Ambari + HDP Cluster on VMware

Contents

I. Required tools:

1. VMware 17.6

2. MobaXterm_Portable_v24.3_Preview2 (optional; any SSH client works)

3. centos-7-x86_64-dvd-2009

4. Ambari 2.7.5, HDP 3.1.5, libtirpc-devel:

5. jdk-18.0.2.1

6. MySQL

7. Python 2.7

II. Server configuration

1. Configure static IPs on the three machines

2. Restart the network service and check the IP

III. Pre-installation environment checks

1. Check the current locale: (echo $LANG)

2. Set the OS time zone

3. Set the OS clock

4. Create an ordinary user "care" on all hosts

5. Sync /etc/hosts to every node in the cluster

6. Disable the firewall

7. Disable SELinux and configure limits parameters

8. Disable the swap partition

9. Set up passwordless trust between nodes

10. Enable VMware local file sharing

11. Fix for the greyed-out "Install VMware Tools" option

12. Install the JDK

IV. Configure the openEuler Ambari + HDP yum repos

1. Configure a local update repo on the mirror host

2. Configure the cluster yum OS repo

V. Ambari service installation

1. Install the MySQL database

2. Initialize MySQL

VI. Ambari installation and configuration

1. Import the Ambari initialization SQL

2. Start Ambari Server

VII. Deploy a Hadoop cluster with Ambari

1. Create a new cluster

2. Deploy base services

3. Install-time error: Requires: libtirpc-devel

4. Enable NameNode HA

5. Install YARN

6. Enable ResourceManager HA

7. Install Hive

8. Install Kafka

9. Install Spark


I. Required tools:

1. VMware 17.6

Download link (via Thunder):

https://downloads2.broadcom.com/?file=VMware-workstation-full-17.6.0-24238078.exe&oid=32016074&id=DBVo7d4XxUwC7FDdXhPlNLvYoSsOWheMEDDhJxfp2IpwiBw84mXh84wP9zA=&verify=1726962800-wsOfp9Td%2FCdLvV4m7ScpoN7lZHhvJydDRlu%2BHVlSIXg%3D

2. MobaXterm_Portable_v24.3_Preview2 (optional; any SSH client works)

Download link:

https://download.mobatek.net/2432024090110214/MobaXterm_Portable_v24.3_Preview2.zip

3. centos-7-x86_64-dvd-2009

Download link:

Index of / (centos.org)

4. Ambari 2.7.5, HDP 3.1.5, libtirpc-devel:

The download link may have expired; message me for a copy:

https://www.alipan.com/t/OME8pv22tHXcgNeJNZfq 

5. jdk-18.0.2.1

Download link:

https://download.oracle.com/java/18/archive/jdk-18.0.2.1_linux-x64_bin.rpm

6. MySQL

Download link:

https://downloads.mysql.com/archives/get/p/23/file/mysql-5.7.44-1.el7.x86_64.rpm-bundle.tar

7. Python 2.7

Download link:

python-release-source package downloads — Alibaba Cloud open-source mirror (aliyun.com)

II. Server configuration

For this step, install three virtual machines in VMware and change their hostnames and static IPs; there are plenty of tutorials online for this.

hadoop101 192.168.131.128

hadoop102 192.168.131.129

hadoop103 192.168.131.130

Connect to the three VMs with MobaXterm:

Open the tool, create a new session, and click SSH.

Enter the IP, tick "Specify username", and enter the user name.

Click the head icon (the credentials manager).

Create the user name and password for the connection.

Accept the host key.

All three machines are now connected.

1. Configure static IPs on the three machines

vi /etc/sysconfig/network-scripts/ifcfg-ens160

hadoop101:

hadoop102:

hadoop103:
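The per-host ifcfg screenshots are not reproduced here. A minimal sketch of /etc/sysconfig/network-scripts/ifcfg-ens160 for hadoop101, assuming the gateway shown in the nmcli output later in this guide (on hadoop102/hadoop103 change IPADDR to .129/.130):

```
TYPE=Ethernet
BOOTPROTO=static
NAME=ens160
DEVICE=ens160
ONBOOT=yes
IPADDR=192.168.131.128
NETMASK=255.255.255.0
GATEWAY=192.168.131.2
DNS1=192.168.131.2
```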

2. Restart the network service and check the IP

Coming from CentOS 7, you would edit the ifcfg file and restart network, but here that fails with "network.service not found.":

root@hadoop101 network-scripts]# systemctl restart network
Failed to restart network.service: Unit network.service not found.

openEuler manages networking with NetworkManager, so there genuinely is no network service. openEuler also does not install net-tools by default, so ifconfig is unavailable unless installed separately. Use systemctl restart NetworkManager instead:

[root@hadoop102 ~]# systemctl restart NetworkManager
[root@hadoop102 ~]# nmcli dev show
GENERAL.DEVICE:                         ens160
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         00:0C:29:D7:6C:C7
GENERAL.MTU:                            1500
GENERAL.STATE:                          100(已连接)
GENERAL.CONNECTION:                     ens160
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/2
WIRED-PROPERTIES.CARRIER:               开
IP4.ADDRESS[1]:                         192.168.131.129/24
IP4.GATEWAY:                            192.168.131.2
IP4.ROUTE[1]:                           dst = 0.0.0.0/0, nh = 192.168.131.2, mt = 100
IP4.ROUTE[2]:                           dst = 192.168.131.0/24, nh = 0.0.0.0, mt = 100
IP4.ROUTE[3]:                           dst = 0.0.0.0/0, nh = 192.168.131.127, mt = 100
IP6.ADDRESS[1]:                         fe80::20c:29ff:fed7:6cc7/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 1024

GENERAL.DEVICE:                         lo
GENERAL.TYPE:                           loopback
GENERAL.HWADDR:                         00:00:00:00:00:00
GENERAL.MTU:                            65536
GENERAL.STATE:                          100(连接(外部))
GENERAL.CONNECTION:                     lo
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
IP4.ADDRESS[1]:                         127.0.0.1/8
IP4.GATEWAY:                            --
IP6.ADDRESS[1]:                         ::1/128
IP6.GATEWAY:                            --

III. Pre-installation environment checks

1. Check the current locale: (echo $LANG)

Switch the default locale from Chinese to English: (echo 'export LANG=en_US.UTF-8' >> ~/.bashrc), then log out and back in (logout).

[root@hadoop101 network-scripts]# echo $LANG
zh_CN.UTF-8
[root@hadoop101 network-scripts]# echo 'export LANG=en_US.UTF-8' >> ~/.bas
[root@hadoop101 network-scripts]# logout
[root@hadoop101 ~]# echo $LANG
en_US.UTF-8
[root@hadoop101 ~]#

2. Set the OS time zone

Set the time zone to Asia/Shanghai on every host and synchronize clocks with NTP: one of the NameNode machines acts as the time server and every other machine syncs against it, keeping cluster time consistent.

Command: (cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime)
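Besides cp, a symlink keeps /etc/localtime pointing at the distribution's zoneinfo file so tzdata updates take effect automatically. A sketch, using a temporary stand-in path so the command can be dry-run without root (use TARGET=/etc/localtime on the cluster hosts):

```shell
# Stand-in for /etc/localtime so this can be tested without root;
# on the cluster hosts set TARGET=/etc/localtime instead.
TARGET=$(mktemp)
ln -sf /usr/share/zoneinfo/Asia/Shanghai "$TARGET"
readlink "$TARGET"   # prints /usr/share/zoneinfo/Asia/Shanghai
```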

3. Set the OS clock

Check whether the NTP service is installed: (rpm -qa | grep ntp). Install it: (yum install ntp)

Every node's clock must be in sync before the Ambari Web UI can be accessed.

Run the installation on all hosts.

[root@hadoop101 ~]# rpm –qa|grep ntp
-bash: grep ntp: command not found
RPM version 4.18.2
Copyright (C) 1998-2002 - Red Hat, Inc.
(...the rest of rpm's usage output is trimmed...)

Note: the failure above comes from pasting typeset punctuation — an en dash ("–") instead of a plain hyphen — so rpm does not recognize the option and dumps its usage text. Retype the command by hand: rpm -qa | grep ntp
[root@hadoop101 ~]# yum install ntp
Last metadata expiration check: 0:27:24 ago on Sun 22 Sep 2024 09:06:03 AM CST.
Dependencies resolved.
=========================================================================================================================================================
 Package                               Architecture                    Version                                     Repository                       Size
=========================================================================================================================================================
Installing:
 ntp                                   x86_64                          4.2.8p17-3.oe2403                           OS                              621 k
Installing dependencies:
 autogen                               x86_64                          5.18.16-4.oe2403                            OS                              467 k
 gc                                    x86_64                          8.2.4-1.oe2403                              OS                              249 k
 guile                                 x86_64                          5:2.2.7-6.oe2403                            update                          6.3 M
 libtool-ltdl                          x86_64                          2.4.7-3.oe2403                              OS                               30 k
Installing weak dependencies:
 ntpstat                               noarch                          0.6-4.oe2403                                OS                               12 k

Transaction Summary
=========================================================================================================================================================
Install  6 Packages

Total download size: 7.7 M
Installed size: 48 M
Is this ok [y/N]: y
Downloading Packages:
(1/6): libtool-ltdl-2.4.7-3.oe2403.x86_64.rpm                                                                             24 kB/s |  30 kB     00:01
(2/6): gc-8.2.4-1.oe2403.x86_64.rpm                                                                                       95 kB/s | 249 kB     00:02
(3/6): ntpstat-0.6-4.oe2403.noarch.rpm                                                                                   104 kB/s |  12 kB     00:00
(4/6): autogen-5.18.16-4.oe2403.x86_64.rpm                                                                               110 kB/s | 467 kB     00:04
(5/6): ntp-4.2.8p17-3.oe2403.x86_64.rpm                                                                                   78 kB/s | 621 kB     00:07
(6/6): guile-2.2.7-6.oe2403.x86_64.rpm                                                                                   134 kB/s | 6.3 MB     00:48
---------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                    153 kB/s | 7.7 MB     00:51
retrieving repo key for OS unencrypted from http://repo.openeuler.org/openEuler-24.03-LTS/OS/x86_64/RPM-GPG-KEY-openEuler
OS                                                                                                                       2.6 kB/s | 3.0 kB     00:01
Importing GPG key 0xB675600B:
 Userid     : "openeuler <openeuler@compass-ci.com>"
 Fingerprint: 8AA1 6BF9 F2CA 5244 010D CA96 3B47 7C60 B675 600B
 From       : http://repo.openeuler.org/openEuler-24.03-LTS/OS/x86_64/RPM-GPG-KEY-openEuler
Is this ok [y/N]: y
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                 1/1
  Installing       : gc-8.2.4-1.oe2403.x86_64                                                                                                        1/6
  Installing       : libtool-ltdl-2.4.7-3.oe2403.x86_64                                                                                              2/6
  Installing       : guile-5:2.2.7-6.oe2403.x86_64                                                                                                   3/6
  Installing       : autogen-5.18.16-4.oe2403.x86_64                                                                                                 4/6
  Installing       : ntpstat-0.6-4.oe2403.noarch                                                                                                     5/6
  Running scriptlet: ntp-4.2.8p17-3.oe2403.x86_64                                                                                                    6/6
  Installing       : ntp-4.2.8p17-3.oe2403.x86_64                                                                                                    6/6
  Running scriptlet: ntp-4.2.8p17-3.oe2403.x86_64                                                                                                    6/6
  Verifying        : autogen-5.18.16-4.oe2403.x86_64                                                                                                 1/6
  Verifying        : gc-8.2.4-1.oe2403.x86_64                                                                                                        2/6
  Verifying        : libtool-ltdl-2.4.7-3.oe2403.x86_64                                                                                              3/6
  Verifying        : ntp-4.2.8p17-3.oe2403.x86_64                                                                                                    4/6
  Verifying        : ntpstat-0.6-4.oe2403.noarch                                                                                                     5/6
  Verifying        : guile-5:2.2.7-6.oe2403.x86_64                                                                                                   6/6

Installed:
  autogen-5.18.16-4.oe2403.x86_64 gc-8.2.4-1.oe2403.x86_64 guile-5:2.2.7-6.oe2403.x86_64 libtool-ltdl-2.4.7-3.oe2403.x86_64 ntp-4.2.8p17-3.oe2403.x86_64
  ntpstat-0.6-4.oe2403.noarch

Complete!
[root@hadoop101 ~]#

Check on every node in the cluster that clock synchronization starts at boot.

systemctl is-enabled ntpd (check whether it starts at boot)

systemctl enable ntpd (enable start at boot)

[root@hadoop101 ~]# systemctl is-enabled ntpd
disabled
[root@hadoop101 ~]# systemctl enable ntpd
Created symlink /etc/systemd/system/multi-user.target.wants/ntpd.service → /usr/lib/systemd/system/ntpd.service.

Enable clock synchronization

hadoop101: the server

[root@hadoop101 ~]# cat /etc/ntp.conf
# For more information about this file, see the ntp.conf(5) man page.

# Record the frequency of the system clock.
driftfile /var/lib/ntp/drift

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default  nomodify notrap  noepeer noquery

# Permit association with pool servers.

# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 192.168.131.128 nomodify notrap  noepeer noquery
restrict 127.0.0.1
restrict ::1

# Hosts on local network are less restricted.
restrict 192.168.131.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# pool 2.openEuler.pool.ntp.org iburst
server 127.127.1.0
fudge 127.127.1.0 stratum 10




[root@hadoop101 ~]#

hadoop102: client

[root@hadoop102 ~]# cat /etc/ntp.conf
# For more information about this file, see the ntp.conf(5) man page.

# Record the frequency of the system clock.
driftfile /var/lib/ntp/drift

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default  nomodify notrap  noepeer noquery

# Permit association with pool servers.

# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 192.168.131.129 nomodify notrap  noepeer noquery
restrict 127.0.0.1
restrict ::1

# Hosts on local network are less restricted.
restrict 192.168.131.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# pool 2.openEuler.pool.ntp.org iburst
server 192.168.131.128 iburst

[root@hadoop102 ~]#

hadoop103: client

[root@hadoop103 ~]# cat /etc/ntp.conf
# For more information about this file, see the ntp.conf(5) man page.

# Record the frequency of the system clock.
driftfile /var/lib/ntp/drift

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default  nomodify notrap  noepeer noquery

# Permit association with pool servers.

# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 192.168.131.130 nomodify notrap  noepeer noquery
restrict 127.0.0.1
restrict ::1

# Hosts on local network are less restricted.
restrict 192.168.131.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# pool 2.openEuler.pool.ntp.org iburst
server 192.168.131.128 iburst

[root@hadoop103 ~]#

Restart ntpd on all machines:

systemctl restart ntpd.service

Check the state of the ntpd process

Command: watch "ntpq -p"

To stop watching: press Ctrl+C.

The character in the first column indicates the quality of the source; an asterisk (*) marks the current reference source.

remote: the source's IP address or hostname.

when: seconds elapsed since the source was last polled.

poll: the polling interval, in seconds; it increases as the local clock's accuracy allows.

reach: an octal number indicating reachability; 377 means the source answered the last eight consecutive polls.

offset: the difference between the source clock and the local clock, in milliseconds.

Running ntpstat at first reports the following:

[root@hadoop101 ~]# ntpstat
unsynchronised
poll interval unknown

This is normal: once the NTP configuration is in place, it takes roughly 5-10 minutes to synchronize with the reference configured in /etc/ntp.conf.

After waiting a while, run ntpstat again and the output becomes normal:

[root@hadoop101 ~]# ntpstat
synchronised to local net at stratum 6
   time correct to within 11 ms
   polling server every 64 s

4. Create an ordinary user "care" on all hosts

The care user is used to install ambari-server and for day-to-day maintenance (run these as root):

  1. (groupadd care)
  2. (useradd -g care -d /home/care care) // group, home directory, and user name
  3. (echo "care@123" | passwd --stdin care) // set the care password
  4. Grant sudo privileges
[root@hadoop101 ~]# groupadd care
[root@hadoop101 ~]# useradd -g care -d /home/care care
[root@hadoop101 ~]# echo "care@123" | passwd --stdin care
Changing password for user care.
passwd: all authentication tokens updated successfully.
[root@hadoop101 ~]# cp /etc/sudoers /etc/sudoers_bak
[root@hadoop101 ~]# echo "care ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
[root@hadoop101 ~]#

5. Sync /etc/hosts to every node in the cluster

[root@hadoop101 ~]# vi /etc/hosts
[root@hadoop101 ~]# scp /etc/hosts root@192.168.131.129:/etc/hosts
The authenticity of host '192.168.131.129 (192.168.131.129)' can't be established.
ED25519 key fingerprint is SHA256:B8zQ+D4/7Xv/hHb5MnQQFT5h3P1JSH2efoMgoG09CS4.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.131.129' (ED25519) to the list of known hosts.

Authorized users only. All activities may be monitored and reported.
root@192.168.131.129's password:
hosts                                                    

The result:
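The screenshot of the result is not reproduced here; for this lab the /etc/hosts entries would be:

```
192.168.131.128 hadoop101
192.168.131.129 hadoop102
192.168.131.130 hadoop103
```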

6. Disable the firewall

[root@hadoop101 ~]# systemctl disable firewalld
Removed "/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service".
Removed "/etc/systemd/system/multi-user.target.wants/firewalld.service".
[root@hadoop101 ~]# systemctl stop firewalld
[root@hadoop101 ~]#

7. Disable SELinux and configure limits parameters

Permanently keep SELinux from starting: edit /etc/selinux/config (vi /etc/selinux/config) and set

SELINUX=disabled

[root@hadoop101 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Set umask on all cluster hosts so newly created directories get the default permissions (umask 0022):

(echo "umask 0022" >> /etc/profile)

Hadoop uses a very large number of file handles at the same time; the default limit of 1024 on most Linux systems is not enough. Edit /etc/security/limits.conf (vi /etc/security/limits.conf) and add the following:
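The list of lines to add was omitted in the original. A commonly used sketch for /etc/security/limits.conf (the 65536 values are an assumption — size them to your workload):

```
*    soft    nofile    65536
*    hard    nofile    65536
*    soft    nproc     65536
*    hard    nproc     65536
```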

8. Disable the swap partition

On every host in the cluster:

(sysctl vm.swappiness=0)

(echo vm.swappiness=0 >> /etc/sysctl.conf)

(This minimizes the kernel's tendency to swap; to turn swap off completely you would additionally run swapoff -a and remove the swap line from /etc/fstab.)

9. Set up passwordless trust between nodes

On every host in the cluster, switch to the care user and generate an SSH key pair:

(ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa)

[root@hadoop101 ~]# su - care


Welcome to 6.6.0-28.0.0.34.oe2403.x86_64

System information as of time:  2024年 09月 22日 星期日 10:28:20 CST

System load:    0.00
Memory used:    18.0%
Swap used:      0%
Usage On:       6%
IP address:     192.168.131.128
Users online:   2
To run a command as administrator(user "root"),use "sudo <command>".
[care@hadoop101 ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/home/care/.ssh'.
Your identification has been saved in /home/care/.ssh/id_rsa
Your public key has been saved in /home/care/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:b7n0aS4w8FfroGOWSxPQ7UboMP5hrH0LWfC13rC5XeA care@hadoop101
The key's randomart image is:
+---[RSA 3072]----+
|                 |
|       . o       |
|      + + o .    |
|     ..* = ...   |
|      .oS =.o..  |
|       =+Boo.* . |
|      . B**o+ E .|
|       .**.+o+ . |
|       o.oo+= .  |
+----[SHA256]-----+
[care@hadoop101 ~]$

Copy the public key file id_rsa.pub generated on the Ambari Server host to every Ambari Agent host and add it to the agents' authorized key lists.

On the Ambari Server host run:

(cd ~/.ssh/)

(ssh-copy-id -i ~/.ssh/id_rsa.pub care@{target_host})

//{target_host} covers all hosts in the Hadoop cluster; a hostname or an IP address both work

//after the command completes, the file authorized_keys exists under ~/.ssh/ on {target_host}

(chmod 700 ~/.ssh)

(chmod 600 ~/.ssh/authorized_keys)

Verify: as the care user, ssh no longer asks for a password

ssh hadoop101

ssh hadoop102

ssh hadoop103

[care@hadoop101 .ssh]$ ssh hadoop101

Authorized users only. All activities may be monitored and reported.

Authorized users only. All activities may be monitored and reported.
Last login: Sun Sep 22 10:37:21 2024 from 192.168.131.130


Welcome to 6.6.0-28.0.0.34.oe2403.x86_64

System information as of time:  2024年 09月 22日 星期日 10:38:07 CST

System load:    0.00
Memory used:    17.1%
Swap used:      0%
Usage On:       6%
IP address:     192.168.131.128
Users online:   3
To run a command as administrator(user "root"),use "sudo <command>".
[care@hadoop101 ~]$ ^C
[care@hadoop101 ~]$
注销
Connection to hadoop101 closed.
[care@hadoop101 .ssh]$ ssh hadoop102

Authorized users only. All activities may be monitored and reported.

Authorized users only. All activities may be monitored and reported.
Last login: Sun Sep 22 10:35:42 2024


Welcome to 6.6.0-28.0.0.34.oe2403.x86_64

System information as of time:  2024年 09月 22日 星期日 10:38:14 CST

System load:    0.00
Memory used:    17.1%
Swap used:      0%
Usage On:       4%
IP address:     192.168.131.129
Users online:   3
To run a command as administrator(user "root"),use "sudo <command>".
[care@hadoop102 ~]$
注销
Connection to hadoop102 closed.
[care@hadoop101 .ssh]$ ssh hadoop103

Authorized users only. All activities may be monitored and reported.

Authorized users only. All activities may be monitored and reported.
Last login: Sun Sep 22 10:35:42 2024


Welcome to 6.6.0-28.0.0.34.oe2403.x86_64

System information as of time:  2024年 09月 22日 星期日 10:38:17 CST

System load:    0.00
Memory used:    17.4%
Swap used:      0%
Usage On:       4%
IP address:     192.168.131.130
Users online:   3
To run a command as administrator(user "root"),use "sudo <command>".
[care@hadoop103 ~]$

10. Enable VMware local file sharing

Much of the software we install into the VMs lives on the local computer, so first enable sharing a local directory into the VMs.

11. Fix for the greyed-out "Install VMware Tools" option

The "Install VMware Tools" menu item in VMware is greyed out.

First shut down hadoop101.

Then open "Edit virtual machine settings" and point the CD/DVD drive at the ISO under the VMware installation directory — not the OS image, but the linux.iso that ships with the local VMware installation — and power the VM back on.

Mount the disc and the tarball appears:

[root@hadoop101 cdrom]# mkdir /mnt/cdrom
mkdir: cannot create directory ‘/mnt/cdrom’: File exists
[root@hadoop101 cdrom]# mount /dev/cdrom /mnt/cdrom
mount: /mnt/cdrom: WARNING: source write-protected, mounted read-only.
[root@hadoop101 cdrom]# cd /mnt/cdrom/
[root@hadoop101 cdrom]# ll
total 56849
-r-xr-xr-x 1 root root     1976 Mar 25  2020 manifest.txt
-r-xr-xr-x 1 root root     4943 Mar 25  2020 run_upgrader.sh
-r--r--r-- 1 root root 56414224 Mar 25  2020 VMwareTools-10.3.22-15902021.tar.gz
-r-xr-xr-x 1 root root   872044 Mar 25  2020 vmware-tools-upgrader-32
-r-xr-xr-x 1 root root   918184 Mar 25  2020 vmware-tools-upgrader-64
[root@hadoop101 cdrom]#

Install VMware Tools under /tmp:

[root@hadoop101 cdrom]# cd /tmp
[root@hadoop101 tmp]# tar zxpf /mnt/cdrom/VMwareTools-10.3.22-15902021.tar.gz
[root@hadoop101 tmp]# ll
total 0
drwx------ 3 root root  60 Sep 22 11:17 systemd-private-db49186f2824451fa2b806433924aaae-chronyd.service-DiBzMQ
drwx------ 3 root root  60 Sep 22 11:17 systemd-private-db49186f2824451fa2b806433924aaae-polkit.service-97OMoi
drwx------ 3 root root  60 Sep 22 11:17 systemd-private-db49186f2824451fa2b806433924aaae-systemd-logind.service-pQgLsc
drwxr-xr-x 9 root root 240 Mar 25  2020 vmware-tools-distrib
[root@hadoop101 tmp]# cd vmware-tools-distrib/
[root@hadoop101 vmware-tools-distrib]# ll
total 372
drwxr-xr-x  2 root root    100 Mar 25  2020 bin
drwxr-xr-x  5 root root    100 Mar 25  2020 caf
drwxr-xr-x  2 root root    100 Mar 25  2020 doc
drwxr-xr-x  5 root root    400 Mar 25  2020 etc
-rw-r--r--  1 root root 146996 Mar 25  2020 FILES
-rw-r--r--  1 root root   2538 Mar 25  2020 INSTALL
drwxr-xr-x  2 root root    120 Mar 25  2020 installer
drwxr-xr-x 14 root root    280 Mar 25  2020 lib
drwxr-xr-x  3 root root     60 Mar 25  2020 vgauth
-rwxr-xr-x  1 root root 227024 Mar 25  2020 vmware-install.pl
[root@hadoop101 vmware-tools-distrib]# ./vmware-install.pl
Creating a new VMware Tools installer database using the tar4 format.

Installing VMware Tools.

In which directory do you want to install the binary files?
[/usr/bin]

If you have no special requirements, just press Enter through every ./vmware-install.pl prompt; output like the following means the installation is complete.

Generating the key and certificate files.
Successfully generated the key and certificate files.
The configuration of VMware Tools 10.3.22 build-15902021 for Linux for this
running kernel completed successfully.

You must restart your X session before any mouse or graphics changes take
effect.

To enable advanced X features (e.g., guest resolution fit, drag and drop, and
file and text copy/paste), you will need to do one (or more) of the following:
1. Manually start /usr/bin/vmware-user
2. Log out and log back into your desktop session
3. Restart your X session.

Enjoy,

--the VMware team

This directory is the shared folder:

[root@hadoop101 BigData]# pwd
/mnt/hgfs/BigData
[root@hadoop101 BigData]# ll
total 5170081
drwxrwxrwx 1 root root       4096 Sep 22 08:43 ambari
-rwxrwxrwx 1 root root 1071542888 Sep 17 08:28 ideaIU-2024.2.1.exe
drwxrwxrwx 1 root root          0 Sep 17 08:35 idea激活
-rwxrwxrwx 1 root root 4222615552 Sep 14 18:10 openEuler-24.03-LTS-x86_64-dvd.iso
[root@hadoop101 BigData]#

As you can see, it matches the local directory we shared.

12. Install the JDK

Installing from the rpm package is recommended; download the JDK and put it in the shared directory.

[root@hadoop101 BigData]# rpm -i jdk-18.0.2.1_linux-x64_bin.rpm

Edit /etc/profile and add JAVA_HOME.
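A sketch of the lines to append to /etc/profile; the install path is an assumption (the Oracle JDK rpm normally unpacks under /usr/java), so verify it on your host first:

```shell
# Assumed install path for the Oracle JDK 18 rpm — confirm with: ls /usr/java
export JAVA_HOME=/usr/java/jdk-18.0.2.1
export PATH=$JAVA_HOME/bin:$PATH
```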

The JDK is installed successfully:

[root@hadoop101 BigData]# java -version
java version "18.0.2.1" 2022-08-18
Java(TM) SE Runtime Environment (build 18.0.2.1+1-1)
Java HotSpot(TM) 64-Bit Server VM (build 18.0.2.1+1-1, mixed mode, sharing)
[root@hadoop101 BigData]#

IV. Configure the openEuler Ambari + HDP yum repos

Pick hadoop101 as the yum mirror host and configure a local yum repo on it for updating system software.

Copy the system image to the /usr/local/centos directory.

Mount it: mount -o loop /usr/local/centos/openEuler-24.03-LTS-x86_64-dvd.iso /var/www/html/centos

1. Configure a local update repo on the mirror host

Install the HTTP service:

  yum install httpd

(vi /etc/httpd/conf/httpd.conf)

  1. Set DocumentRoot to "/var/www/html"

  2. In the <Directory "/var/www/html"> section, add Options Indexes FollowSymLinks

  3. Uncomment "#ServerName www.example.com:80" and change it to ServerName localhost

  4. Remove the default pages:

(rm -f /etc/httpd/conf.d/welcome.conf /var/www/error/noindex.html)
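After the edits in steps 1-3, the relevant httpd.conf lines would read something like this (the AllowOverride/Require lines are the stock defaults, shown only for context):

```
ServerName localhost
DocumentRoot "/var/www/html"
<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>
```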

[root@hadoop101 BigData]# mkdir -p /var/www/html/centos
[root@hadoop101 BigData]# mkdir -p /usr/local/centos
[root@hadoop101 BigData]# cp /mnt/hgfs/BigData/
ambari/                             idea激活/                           openEuler-24.03-LTS-x86_64-dvd.iso
ideaIU-2024.2.1.exe                 jdk-18.0.2.1_linux-x64_bin.rpm
[root@hadoop101 BigData]# cp /mnt/hgfs/BigData/openEuler-24.03-LTS-x86_64-dvd.iso /usr/local/centos
[root@hadoop101 BigData]# mount -o loop /usr/local/centos/openEuler-24.03-LTS-x86_64-dvd.iso /var/www/html/centos
mount: /var/www/html/centos: WARNING: source write-protected, mounted read-only.
[root@hadoop101 BigData]# cd /etc/yum.repos.d
[root@hadoop101 yum.repos.d]# mkdir -p /etc/yum.repos.d/bak
[root@hadoop101 yum.repos.d]# mv *.repo bak
[root@hadoop101 yum.repos.d]# vi /etc/yum.repos.d/euler.repo
[root@hadoop101 yum.repos.d]# rpm -qa|grep http
libnghttp2-1.58.0-2.oe2403.x86_64
httpd-filesystem-2.4.58-6.oe2403.noarch
openEuler-logos-httpd-1.0-9.oe2403.noarch
httpd-tools-2.4.58-6.oe2403.x86_64
mod_http2-2.0.25-3.oe2403.x86_64
httpd-2.4.58-6.oe2403.x86_64
[root@hadoop101 yum.repos.d]# vi /etc/httpd/conf/httpd.conf
[root@hadoop101 yum.repos.d]# rm -f /etc/httpd/conf.d/welcome.conf /var/www/error/noindex.html
[root@hadoop101 yum.repos.d]# systemctl enable httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@hadoop101 yum.repos.d]# systemctl restart httpd
[root@hadoop101 yum.repos.d]#

Enable HTTP at boot and restart it:

systemctl enable httpd

systemctl restart httpd

Browse to 192.168.131.128 from the local machine.

If the following problem appears, the directory permissions are insufficient; chmod -R 755 /var/www/html/ fixes it.

Normal state:

The mirror's yum repo is now done.

2. Configure the cluster yum OS repo

Extract the four tarballs (create the target directories first):

  mkdir -p /var/www/html/{ambari,hdp-utils,hdp-gpl,hdp}
  tar -zxvf ambari-2.7.5.0-centos7.tar.gz -C /var/www/html/ambari
  tar -zxvf HDP-UTILS-1.1.0.22-centos7.tar.gz -C /var/www/html/hdp-utils
  tar -zxvf HDP-GPL-3.1.5.0-centos7-gpl.tar.gz -C /var/www/html/hdp-gpl/
  tar -zxvf HDP-3.1.5.0-centos7-rpm.tar.gz -C /var/www/html/hdp

Add the repo files

ambari.repo

[root@hadoop101 yum.repos.d]# cat ambari.repo
[ambari-2.7.5.0]
name=ambari-2.7.5.0
baseurl=http://192.168.131.128/ambari/ambari/centos7/2.7.5.0-72
path=/
gpgcheck=0
gpgkey=http://192.168.131.128/ambari/ambari/centos7/2.7.5.0-72/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
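A repo file like the one above can also be written non-interactively with a heredoc. A sketch — REPO_DIR defaults to a scratch directory so it is safe to dry-run; set REPO_DIR=/etc/yum.repos.d on a real host:

```shell
# Write ambari.repo; the baseurl is this lab's mirror host (192.168.131.128).
REPO_DIR="${REPO_DIR:-$(mktemp -d)}"
cat > "$REPO_DIR/ambari.repo" <<'EOF'
[ambari-2.7.5.0]
name=ambari-2.7.5.0
baseurl=http://192.168.131.128/ambari/ambari/centos7/2.7.5.0-72
path=/
gpgcheck=0
enabled=1
priority=1
EOF
echo "wrote $REPO_DIR/ambari.repo"
```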

The local OS repo:

[root@hadoop101 yum.repos.d]# cat Centos.repo
[Centos]
name=Centos
baseurl=http://192.168.131.128/centos
gpgkey=http://192.168.131.128/centos/RPM-GPG-KEY-Centos-7
gpgcheck=1
enabled=1

hdp.repo

[root@hadoop101 yum.repos.d]# cat hdp.repo
[HDP-3.1]
name=HDP-3.1
baseurl=http://192.168.131.128/hdp/HDP/centos7/3.1.5.0-152/
path=/
enabled=1
gpgcheck=0

[HDP-UTILS-1.1.0.22]
name=HDP-UTILS-1.1.0.22
baseurl=http://192.168.131.128/hdp-utils/HDP-UTILS/centos7/1.1.0.22/
path=/
enabled=1
gpgcheck=0

[HDP-3.1-GPL]
name=HDP-3.1-GPL
baseurl=http://192.168.131.128/hdp-gpl/HDP-GPL/centos7/3.1.5.0-152/
path=/
enabled=1
gpgcheck=0

Copy the ambari.repo file from the mirror host's /etc/yum.repos.d/ to the /etc/yum.repos.d/ directory on hadoop102 and hadoop103. Run the copy from the mirror host.

Then clear the repo cache.

[root@hadoop101 yum.repos.d]# scp ambari.repo root@hadoop103:/etc/yum.repos.d/

Authorized users only. All activities may be monitored and reported.
root@hadoop103's password:
ambari.repo                                                                                                            100%  244   549.5KB/s   00:00
[root@hadoop101 yum.repos.d]# yum clean all
0 files removed
[root@hadoop101 yum.repos.d]#

Refresh the browser page.

V. Ambari service installation

1. Install the MySQL database

Installing from rpm packages is recommended; a master-master replication setup is best.

Check for old versions:

(rpm -qa | grep mysql)

(rpm -qa | grep mariadb)

(In the original session the command was mistyped as "pm", which is why it printed "command not found"; the command is rpm.)

After removing anything found, install MySQL. This lab deploys a single instance on hadoop101.

Extract the downloaded bundle.

Install in dependency order: common → libs → client → server

[root@hadoop101 mysql]# rpm -ivh mysql-community-common-5.7.44-1.el7.x86_64.rpm
warning: mysql-community-common-5.7.44-1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 3a79bd29: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-community-common-5.7.44-1.e################################# [100%]
[root@hadoop101 mysql]# rpm -ivh mysql-community-libs-5.7.44-1.el7.x86_64.rpm
warning: mysql-community-libs-5.7.44-1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 3a79bd29: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-community-libs-5.7.44-1.el7################################# [100%]
[root@hadoop101 mysql]# rpm -ivh mysql-community-client-5.7.44-1.el7.x86_64.rpm
warning: mysql-community-client-5.7.44-1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 3a79bd29: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-community-client-5.7.44-1.e################################# [100%]
[root@hadoop101 mysql]# rpm -ivh mysql-community-server-5.7.44-1.el7.x86_64.rpm
warning: mysql-community-server-5.7.44-1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 3a79bd29: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-community-server-5.7.44-1.e################################# [100%]

2. Initialize MySQL

Start MySQL:

systemctl start mysqld

If startup fails with the error below, install compat-openssl11-1.1.1k-4.0.1.el9_0.x86_64.rpm.

Download link: https://yum.oracle.com/repo/OracleLinux/OL9/appstream/x86_64/getPackage/compat-openssl11-1.1.1k-4.0.1.el9_0.x86_64.rpm

At this point, MySQL is basically installed.

1. Find the temporary login password

(grep password /var/log/mysqld.log)

[root@hadoop101 BigData]# grep password /var/log/mysqld.log
2024-09-22T08:36:39.757410Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: %F9dgdoc4b+M
[root@hadoop101 BigData]#

Log in to MySQL and change the temporary password:

mysql -uroot -p

Enter the temporary password from above, then run:

set global validate_password_policy=0;

set global validate_password_length=1;

ALTER USER 'root'@'localhost' IDENTIFIED BY 'care';
2. Allow remote root login

(mysql> CREATE USER 'root'@'%' IDENTIFIED BY 'care';)

(mysql>GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;)

(mysql>FLUSH PRIVILEGES;)

3. Create the ambari database and grant privileges

(mysql>create database ambari;)

(mysql> use ambari;)

(mysql>CREATE USER 'ambari'@'%' IDENTIFIED BY 'care'; )

(mysql>GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%'; )

(mysql>CREATE USER 'ambari'@'localhost' IDENTIFIED BY 'care'; )

(mysql>GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost'; )

(mysql>CREATE USER 'ambari'@'hadoop101' IDENTIFIED BY 'care'; )

(mysql>GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'hadoop101'; )

(mysql>FLUSH PRIVILEGES; )

六、Ambari Installation and Configuration

Perform the following steps as the care user.

sudo yum install ambari-server

[care@hadoop101 ~]$ sudo yum install ambari-server
Last metadata expiration check: 0:15:33 ago on 2024年09月22日 星期日 17时05分52秒.
Error:
 Problem: 冲突的请求
  - nothing provides postgresql-server >= 8.1 needed by ambari-server-2.7.5.0-72.x86_64 from ambari-2.7.5.0
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

The install failed here because my local yum repo has no PostgreSQL packages. Download the three packages below and try again.

Download: Index of /pub/repos/yum/12/redhat/rhel-8-x86_64/ (postgresql.org)

If installing them reports dependency errors, add --nodeps.

[care@hadoop101 BigData]$ ll postgre*
-rwxrwxrwx 1 root root 1742040  9月22日 17:26 postgresql12-12.10-1PGDG.rhel8.x86_64.rpm
-rwxrwxrwx 1 root root  409856  9月22日 17:24 postgresql12-libs-12.10-1PGDG.rhel8.x86_64.rpm
-rwxrwxrwx 1 root root 5477768  9月22日 17:24 postgresql12-server-12.10-1PGDG.rhel8.x86_64.rpm
[care@hadoop101 BigData]$

ambari-server installs successfully:


[care@hadoop101 ~]$ sudo yum install ambari-server
Last metadata expiration check: 0:36:19 ago on 2024年09月22日 星期日 17时05分52秒.
Dependencies resolved.
====================================================================================================================================================================================
 Package                                               Architecture                     Version                                      Repository                                Size
====================================================================================================================================================================================
Installing:
 ambari-server                                         x86_64                           2.7.5.0-72                                   ambari-2.7.5.0                           373 M
Installing dependencies:
 python3-unversioned-command                           x86_64                           3.11.6-2.oe2403                              Euler                                    5.8 k

Transaction Summary
====================================================================================================================================================================================
Install  2 Packages

Total download size: 373 M
Installed size: 439 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): python3-unversioned-command-3.11.6-2.oe2403.x86_64.rpm                                                                                       1.4 MB/s | 5.8 kB     00:00
(2/2): ambari-server-2.7.5.0-72.x86_64.rpm                                                                                                          139 MB/s | 373 MB     00:02
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                               139 MB/s | 373 MB     00:02
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                            1/1
  Installing       : python3-unversioned-command-3.11.6-2.oe2403.x86_64                                                                                                         1/2
  Running scriptlet: ambari-server-2.7.5.0-72.x86_64                                                                                                                            2/2
  Installing       : ambari-server-2.7.5.0-72.x86_64                                                                                                                            2/2
  Running scriptlet: ambari-server-2.7.5.0-72.x86_64                                                                                                                            2/2
Cannot detect Python for Ambari to use. Please manually set /usr/bin/ambari-python-wrap link to point to correct Python binary

  Verifying        : ambari-server-2.7.5.0-72.x86_64                                                                                                                            1/2
  Verifying        : python3-unversioned-command-3.11.6-2.oe2403.x86_64                                                                                                         2/2

Installed:
  ambari-server-2.7.5.0-72.x86_64                                                 python3-unversioned-command-3.11.6-2.oe2403.x86_64

Complete!

Initialize the Ambari database connection:

(sudo ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar)

Initialization succeeded. (The downloaded connector-java*.jar must be renamed to mysql-connector-java.jar and placed in /usr/share/java/.)

Ambari needs a Python 2 environment, but openEuler ships only Python 3, so build Python 2.x from source on openEuler:

# Install build tools
yum -y install make gcc gcc-c++

# Build and install Python 2
wget https://www.python.org/ftp/python/2.7.18/Python-2.7.18.tar.xz
tar -Jxf Python-2.7.18.tar.xz
cd Python-2.7.18
./configure --enable-shared --prefix=/usr/local/python-2.7.18 && make -j 8 && make install

# Symlink so the python command resolves to Python 2
ln -s /usr/local/python-2.7.18/bin/python /usr/bin/python

# Register the shared-library path
echo /usr/local/python-2.7.18/lib > /etc/ld.so.conf.d/py2.conf
ldconfig

# Install the pip2 package manager
/usr/local/python-2.7.18/bin/python -m ensurepip
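The "Cannot detect Python for Ambari to use" message printed during the ambari-server install can then be cleared by pointing the wrapper link at this Python 2; a sketch, assuming the prefix used above:

```shell
# Point Ambari's Python wrapper at the freshly built Python 2
ln -sf /usr/local/python-2.7.18/bin/python /usr/bin/ambari-python-wrap
# Sanity check: both should report Python 2.7.18
python -V
/usr/bin/ambari-python-wrap -V
```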
[root@hadoop101 mysql]#  ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
Using python  /usr/bin/python
Setup ambari-server
Copying /usr/share/java/mysql-connector-java.jar to /var/lib/ambari-server/resources/mysql-connector-java.jar
If you are updating existing jdbc driver jar for mysql with mysql-connector-java.jar. Please remove the old driver jar, from all hosts. Restarting services that need the driver, will automatically copy the new jar to the hosts.
JDBC driver was successfully initialized.
Ambari Server 'setup' completed successfully.

Ambari is installed.

Run (sudo ambari-server setup) and follow the prompts.

[care@hadoop101 ~]$ sudo ambari-server setup
Using python  /usr/bin/python
Setup ambari-server
Checking SELinux...
SELinux status is 'enabled'
SELinux mode is 'permissive'
WARNING: SELinux is set to 'permissive' mode and temporarily disabled.
OK to continue [y/n] (y)? y
Customize user account for ambari-server daemon [y/n] (n)? y
Enter user account for ambari-server daemon (root):care
Adjusting ambari-server permissions and ownership...
Checking firewall status...
Checking JDK...
[1] Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
[2] Custom JDK
==============================================================================
Enter choice (1): 2
WARNING: JDK must be installed on all hosts and JAVA_HOME must be valid on all hosts.
WARNING: JCE Policy files are required for configuring Kerberos security. If you plan to use Kerberos,please make sure JCE Unlimited Strength Jurisdiction Policy Files are valid on all hosts.
Path to JAVA_HOME: /usr/java/jdk1.8.0-x64
Validating JDK on Ambari Server...done.
Check JDK version for Ambari Server...
JDK version found: 8
Minimum JDK version is 8 for Ambari. Skipping to setup different JDK for Ambari Server.
Checking GPL software agreement...
GPL License for LZO: https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
Enable Ambari Server to download and install GPL Licensed LZO packages [y/n] (n)? y
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)? y
Configuring database...
==============================================================================
Choose one of the following options:
[1] - PostgreSQL (Embedded)
[2] - Oracle
[3] - MySQL / MariaDB
[4] - PostgreSQL
[5] - Microsoft SQL Server (Tech Preview)
[6] - SQL Anywhere
[7] - BDB
==============================================================================
Enter choice (1): 3
Hostname (localhost):
Port (3306):
Database name (ambari):
Username (ambari):
Enter Database Password (bigdata):
Re-enter password:
Configuring ambari database...
Should ambari use existing default jdbc /usr/share/java/mysql-connector-java.jar [y/n] (y)? y
Configuring remote database connection properties...
WARNING: Before starting Ambari Server, you must run the following DDL directly from the database shell to create the schema: /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
Proceed with configuring remote database connection properties [y/n] (y)? y
Extracting system views...
ambari-admin-2.7.5.0.72.jar
....
Ambari repo file doesn't contain latest json url, skipping repoinfos modification
Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.

If you see the following error:

REASON: Downloading or installing JDK failed: "Fatal exception: Running java version check command failed: invalid literal for int() with base 10: '2 2022-08-18\\n'. Exiting., exit code 1". Exiting.

it means Ambari's Python version check cannot parse this JDK's version string (JDK 18 here); install the latest JDK 8 instead.

Download link: Java Archive Downloads - Java SE 8u211 and later (oracle.com)

[care@hadoop101 ~]$ sudo ambari-server setup
Using python  /usr/bin/python
Setup ambari-server
Checking SELinux...
SELinux status is 'enabled'
SELinux mode is 'permissive'
WARNING: SELinux is set to 'permissive' mode and temporarily disabled.
OK to continue [y/n] (y)? y
Customize user account for ambari-server daemon [y/n] (n)? y
Enter user account for ambari-server daemon (root):care
Adjusting ambari-server permissions and ownership...
Checking firewall status...
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)? y
[1] Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
[2] Custom JDK
==============================================================================
Enter choice (1): 2
WARNING: JDK must be installed on all hosts and JAVA_HOME must be valid on all hosts.
WARNING: JCE Policy files are required for configuring Kerberos security. If you plan to use Kerberos,please make sure JCE Unlimited Strength Jurisdiction Policy Files are valid on all hosts.
Path to JAVA_HOME: /usr/java/jdk-18.0.2.1/
Validating JDK on Ambari Server...done.
Check JDK version for Ambari Server...
ERROR: Exiting with exit code 1.
REASON: Downloading or installing JDK failed: "Fatal exception: Running java version check command failed: invalid literal for int() with base 10: '2 2022-08-18\\n'. Exiting., exit code 1". Exiting.
[care@hadoop101 ~]$ java -version
java version "18.0.2.1" 2022-08-18
Java(TM) SE Runtime Environment (build 18.0.2.1+1-1)
Java HotSpot(TM) 64-Bit Server VM (build 18.0.2.1+1-1, mixed mode, sharing)

1. Import the Ambari initialization SQL

[root@hadoop101 mysql]# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.7.44 MySQL Community Server (GPL)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use ambari;
Database changed
mysql> source  /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
Query OK, 0 rows affected (0.00 sec)

At this point, Ambari installation is complete.

2. Start Ambari Server

(sudo ambari-server start)

[root@hadoop101 ~]# ambari-server start
Using python  /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start.........................
Server started listening on 8080

DB configs consistency check: no errors and warnings were found.
Ambari Server 'start' completed successfully.

七、Deploying a Hadoop Cluster with Ambari

After ambari-server starts, log in to the Ambari web UI from a browser:

192.168.131.128:8080

Default credentials: admin/admin

1. Create a new cluster

 

Select the HDP version; the OS base URL here must match the repo configured earlier.

For the target hosts, enter the hostnames of all three machines. The SSH Private Key is the id_rsa under the care user's ~/.ssh:

[care@hadoop101 .ssh]$ cat id_rsa
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA5d1hp+g+BZv4pR9WyuJqNX8EvdVauTgRsGgHNsXs9q1b+V6r
BA52ppAIOREaEqw3s2BSW+CxHy5vPzMX1yIzI630I0ArZVKS3jwfMZdL6kMgh5Uv
T3BwxmbLCXsAcyhF2oRe21Fv3qqigqcE6Xg44fufpebSmQQI/H9K1BVEan0bY5Ub
RxQT3n3N7Xi3L+5qAUipdNkLobpg5i1m4t9RznCqPyaIz1SIWE65yfJOdekJ3V01
aGIKRuhUwrzBKjHvmqjDiTrJZRw/6E7XDtRPRbf2enFx7lLEKhPE+bejkaexksdh
8ZOyZpvbxR3+5FjjzeEFWvknBt+tJkSU+HG41QIDAQABAoIBAQC43keAJwRatopP
ItlG6rnItJM3qbQBatqvKbtDjgN6kQp7kGuyI3/Bje1PGDYD9oYFud4DDr7k+Q93
oLv3xgWjGHBVOXKtVq/QFEJyO+BOVBaBdLZMCX5p0ppQ0aAW/bjQec1gTirOxiVV
NsZ4jrwQ47IOV4ngjqI7kJS55TDVAqsa82njGCZ52uysxZjDmraHMvKaTNMyRJGK
N6g3jtck0N5uzIkENtNAVAtj+VXDV0jpz0lYKlc+P7sRElqCYzHCI7a0b4FSSLW1
32aEwdqbKeP/BSW9vephhomZyUDxTzSOh/fTm8nsaUkMkW9uemb1VBxIVDfuICK/
Ei+xS2sxAoGBAPZDvF9WAtX9tMTefwTyVrMgFzoDODMuS8jUWIV9hcd/G7mxwMKm
uJy9OWWToZK9sr1fOLQzyZCUBy1AlpxytYRsL/670iMlkB52rd7gaZRa73asonWa
wzhrgAaW5DYVKpdetORPsHA5dbWrHcYdzb9SU9J2PVD9R/SRYs4myUKfAoGBAO7z
rM3Im9nzgcMPx8JmEDH3ANtq3sYngmRR5gr0rFQRtO+fFPSRZ0ESoD0wrYMQmnyX
NBcEDA3fJBiadze2qWmo4Q2UAiPNJY1LXrz77BpXa8ZDnPSUgm54L48O8Td27n5J
da13NOztHm3M8CynLZwyrrQz5kjYJgt24YK6IqQLAoGAaIHuWvcBVRbJtBJIDS1a
pcGkmbXsD6xB9QRIXL4cG8FRXsiUaQafqcSTqwuvsbpXNA5I3hBsJbLsKMQUJmh3
p67R32SNlOTH+GWc+8x4gcDlhpNUjlwTJMpaFnHKfzkUThCe65T152o7DdGEXSMg
wWSKtfH/q3MRKjTYnWvQVTkCgYEAt3ofy/snwJj7oF2zkw9vjA4PeGt9F0YrFwDT
1MG+uObHud665nfngs3cgF+qO6M6HES12J5g6x3Vx5aDyCHXv6vO8vAdHIRfOzkO
S6pcxnUt6hTspdiKtmxOiFh+24nU4t9hHosT9oC0BreAC6lqmi9IelIHlxNxUwg7
bHekNbUCgYA7lMyNew9K8tTQgAFGfPOfOViiEz4PP3kEkgmy50ZVYWwSOzLX+rIE
QVtvdDxxelKfdVRNJ/tfIw8B1GzXK+3yCYNs2KWC3D5KCHncA3KCFp7jhcQBafvq
8/5/FI2tOYtLfS6NhE748pGZkiuP2sksX4mo/GIIsqeAdd1cniIOIg==
-----END RSA PRIVATE KEY-----

Confirm the hosts; Ambari installs ambari-agent and runs host checks. Make sure all checks pass, then click Next.

2. Deploy basic services

Install HDFS, ZooKeeper, and Ambari Metrics first.

Note:

Point Datanode directories only at data-disk paths; using system-disk paths can fill the system disk and hang services.

Keep NameNode directories to 2-3 entries; they can go on larger system-disk paths.

Since this is a lab environment, all directories use a single disk.

For easier management later, every service runs as the care user.

3. Install error: Requires: libtirpc-devel

Fix: install the two packages libtirpc-0.2.4-0.16.el7.x86_64.rpm and libtirpc-devel-0.2.4-0.16.el7.x86_64.rpm.
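A sketch of the fix, assuming both rpm files are in the current directory; install them on every node that pulls in the HDFS/YARN packages:

```shell
# Install libtirpc and its -devel headers, which HDP's hadoop packages require
rpm -ivh libtirpc-0.2.4-0.16.el7.x86_64.rpm \
         libtirpc-devel-0.2.4-0.16.el7.x86_64.rpm
```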

Click Retry and wait for the installation to finish.

The basic services are now installed. There is one warning because the password we set is too short; change it as shown below and restart Grafana:

resource_management.core.exceptions.Fail: Ambari Metrics Grafana password creation failed. PUT request status: 400 Bad Request
{"message":"New password is too short"}

Delete SmartSense; stop the service before deleting it.

4. Enable NameNode HA

Run the steps from the screenshot above on hadoop101:


[care@hadoop101 ambari]$ sudo su care -l -c 'hdfs dfsadmin -safemode enter'
Safe mode is ON
[care@hadoop101 ambari]$ sudo su care -l -c 'hdfs dfsadmin -saveNamespace'
Save namespace successful

After they succeed, return to the page and click Next.

Go back to hadoop101 for the next set of commands.

Some of these run on hadoop101 and some on hadoop102; be careful not to mix them up.

NameNode HA is now configured.

5. Install YARN

6. Enable ResourceManager HA

7. Install Hive

Pig can be skipped; it is no longer mainstream.

mysql> create database hive default character set='utf8';
Query OK, 1 row affected (0.00 sec)

mysql> use hive;
Database changed
mysql> CREATE USER 'hive'@'localhost' IDENTIFIED BY 'care';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost' IDENTIFIED BY 'care' with grant option;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> CREATE USER 'hive'@'%' IDENTIFIED BY 'care';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY 'care' with grant option;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> CREATE USER 'hive'@'hadoop101' IDENTIFIED BY 'care';
Query OK, 0 rows affected (0.00 sec)

mysql>  GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'hadoop101' IDENTIFIED BY 'care' with grant option;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)


Note: when configuring Hive's metastore connection, it is strongly recommended to add a utf8 character-set parameter to the JDBC URL.
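For example, the metastore JDBC URL (the javax.jdo.option.ConnectionURL property) can carry the character-set parameter like this; the URL below is an illustration, adjust the host and options to your setup:

```
jdbc:mysql://hadoop101:3306/hive?createDatabaseIfNotExist=true&characterEncoding=UTF-8
```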

8. Install Kafka

9. Install Spark

         

Error 1:

bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn  --executor-memory 471859200  --total-executor-cores 1 examples/jars/spark-examples_2.11-2.3.2.3.1.5.0-152.jar 3

Submitting the Spark example fails because YARN is constrained by the resources allocated to the VMs:

24/09/25 10:52:22 INFO Client: Requesting a new application from cluster with 3 NodeManagers
24/09/25 10:52:23 WARN JettyUtils: GET /jobs/ failed: java.util.NoSuchElementException
java.util.NoSuchElementException
        at java.util.Collections$EmptyIterator.next(Collections.java:4191)

Fix: add the following to yarn-site.xml:

<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>

Error 2:

Exception in thread "main" java.lang.IllegalArgumentException: Required executor memory (1024+384 MB) is above the max threshold (510 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.

Fix: increase the yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb parameters.
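The numbers in the error follow Spark's container sizing: the requested container is the executor memory plus an overhead of max(384 MB, 10% of the executor memory), and YARN rejects any request above yarn.scheduler.maximum-allocation-mb. A quick check of the arithmetic:

```shell
# Spark-on-YARN container request = executor memory + overhead,
# where overhead = max(384 MB, 10% of executor memory)
exec_mem_mb=1024
tenpct=$(( exec_mem_mb / 10 ))
overhead_mb=$(( tenpct > 384 ? tenpct : 384 ))
required_mb=$(( exec_mem_mb + overhead_mb ))
max_alloc_mb=510   # current yarn.scheduler.maximum-allocation-mb
echo "required=${required_mb}MB, max-allocation=${max_alloc_mb}MB"
```

Here 1024 + 384 = 1408 MB exceeds the 510 MB cap from the error message, so both parameters need to allow at least 1408 MB.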
