Installing Hadoop 3 on macOS

I. Hadoop Installation

1. Install Java

See https://blog.csdn.net/jiuweiC/article/details/104356751

2. Configure SSH

(1) First enable remote login: in System Preferences -> Sharing, check Remote Login on the left and select All Users on the right.
If you skip this step, the following error will appear later:

ssh localhost
ssh: connect to host localhost port 22: Connection refused

(2) Generate a new key pair, otherwise you will later get Permission denied (publickey,password,keyboard-interactive).

 ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa 

Authorize the key:

 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
 chmod 0600 ~/.ssh/authorized_keys
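
After this, you can check that passwordless SSH now works (a quick sanity check; the first connection may ask you to confirm the host key):

 ssh localhost
 exit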

3. Install Hadoop

brew install hadoop
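
After Homebrew finishes, you can confirm the installation by printing the version (the exact version shown depends on what Homebrew currently ships):

 hadoop version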

4. Configuration

The configuration files are mostly under /usr/local/Cellar/hadoop/3.1.0/libexec/etc/hadoop (adjust the version number in the path to match your installed Hadoop version).

a) hadoop-env.sh
Run /usr/libexec/java_home to find out where Java is installed.
You will see a result similar to /Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home:

 /usr/libexec/java_home
 /Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home

Open the hadoop-env.sh file (under etc/hadoop/), find the line # export JAVA_HOME=, and change it as follows:

export JAVA_HOME={your java home directory}

Replace {your java home directory} with the Java path you found above, and remember to remove the leading #. For example: export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home.
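
Alternatively, instead of hard-coding the path, you can let macOS resolve it (this assumes /usr/libexec/java_home is available, which it is on a stock macOS system with a JDK installed):

 export JAVA_HOME=$(/usr/libexec/java_home)
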
b) core-site.xml

Open the core-site.xml file (under etc/hadoop/) and change it as follows:

<configuration>
 <property>
 <name>fs.defaultFS</name>
 <value>hdfs://localhost:9000</value>
 </property>
</configuration>
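
To double-check that Hadoop actually picks up this setting, you can query the effective value from the libexec directory (this assumes the config directory above is the one Hadoop reads):

 bin/hdfs getconf -confKey fs.defaultFS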

c) hdfs-site.xml

Open the hdfs-site.xml file (under etc/hadoop/) and change it as follows:

<configuration>
 <property>
 <name>dfs.replication</name>
 <value>1</value>
 </property>
</configuration>

d) mapred-site.xml

Open the mapred-site.xml file (under etc/hadoop/) and change it as follows:

<configuration>
 <property>
 <name>mapreduce.framework.name</name>
 <value>yarn</value>
 </property>
</configuration>

If the file's suffix is .xml.example, rename it to .xml.
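
Note: on some Hadoop 3.x releases the bundled example MapReduce jobs cannot find their classes unless the MapReduce classpath is also set here. The Apache single-node setup guide adds a property like the following inside the same <configuration> block (treat it as optional and version-dependent):

 <property>
 <name>mapreduce.application.classpath</name>
 <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
 </property>
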
e) yarn-site.xml

Open the yarn-site.xml file (under etc/hadoop/) and change it as follows:

<configuration>
 <property>
 <name>yarn.nodemanager.aux-services</name>
 <value>mapreduce_shuffle</value>
 </property>
 <property>
 <name>yarn.nodemanager.env-whitelist</name>
 <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
 </property>
</configuration>

5. Run

  • 1. Format the filesystem:
cd /usr/local/Cellar/hadoop/3.3.0/libexec
bin/hdfs namenode -format
  • 2. Start the NameNode and DataNode:
sbin/start-dfs.sh
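
At this point you can also check that the daemons really started (jps ships with the JDK; you should normally see NameNode, DataNode, and SecondaryNameNode processes, though the exact list can vary):

 jps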

You should now be able to open the link below in a browser and see the NameNode's Overview page:
NameNode - http://localhost:9870
The command below reports status information; if the web page cannot be opened, its output will point to the cause.

hdfs dfsadmin -report
2020-08-14 19:30:04,025 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 499963174912 (465.63 GB)
Present Capacity: 402799255552 (375.14 GB)
DFS Remaining: 402799251456 (375.14 GB)
DFS Used: 4096 (4 KB)
DFS Used%: 0.00%
Replicated Blocks:
	Under replicated blocks: 0
	Blocks with corrupt replicas: 0
	Missing blocks: 0
	Missing blocks (with replication factor 1): 0
	Low redundancy blocks with highest priority to recover: 0
	Pending deletion blocks: 0
Erasure Coded Block Groups: 
	Low redundancy block groups: 0
	Block groups with corrupt internal blocks: 0
	Missing block groups: 0
	Low redundancy blocks with highest priority to recover: 0
	Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 127.0.0.1:9866 (localhost)
Hostname: 10.43.43.113
Decommission Status : Normal
Configured Capacity: 499963174912 (465.63 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 78992306176 (73.57 GB)
DFS Remaining: 402799251456 (375.14 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.57%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Aug 14 19:30:03 CST 2020
Last Block Report: Fri Aug 14 19:25:39 CST 2020
Num of Blocks: 0

If you get the error

localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused;

then run the format command again:

bin/hdfs namenode -format
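
If HDFS was already partially started when the error appeared, it may help to stop it first, re-run the format, and then start it again (a sketch of the sequence, run from the libexec directory):

 sbin/stop-dfs.sh
 bin/hdfs namenode -format
 sbin/start-dfs.sh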

To make HDFS usable for running MapReduce jobs, create the user directories:

bin/hdfs dfs -mkdir /user
bin/hdfs dfs -mkdir /user/<username>
Replace <username> with your own username (drop the angle brackets).
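
As a quick test that HDFS is writable, you can copy the config files in and list them (this assumes you are still in the libexec directory and created /user/<username> above):

 bin/hdfs dfs -put etc/hadoop/*.xml /user/<username>
 bin/hdfs dfs -ls /user/<username>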

  • 3. Start the ResourceManager and NodeManager:
sbin/start-yarn.sh

You should now be able to open the link below in a browser and see the All Applications page:

ResourceManager - http://localhost:8088
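
To see a job appear on that page, you can run one of the bundled examples (the jar name contains the Hadoop version, so adjust it to match your installation):

 bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0.jar pi 2 5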

Source: https://www.jianshu.com/p/0e7f16469d87
