Hadoop pseudo-distributed configuration

Parameter reference: https://hadoop.apache.org/docs/r2.6.5/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

Set up passwordless SSH

#Note: passwordless login is keyed to the current hostname, so change the hostname and /etc/hosts entries first

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
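If `ssh` to the hostname still prompts for a password after this, sshd is usually rejecting the key because of loose permissions. A minimal fix-up sketch (sshd's StrictModes requires `~/.ssh` to be `700` and `authorized_keys` to be writable only by the owner):

```shell
# Tighten the permissions sshd expects before it will accept the key
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

After this, `ssh node01` should log in without a prompt (accept the host-key fingerprint on the first connection). One caveat: OpenSSH 7.0 and later disable DSA host and user keys by default, so on newer systems generate an RSA key instead (`ssh-keygen -t rsa`).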

Set environment variables

export JAVA_HOME=/usr/local/jdk1.8
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
export HADOOP_HOME=/usr/local/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
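Exports typed at the prompt are lost when the shell exits. One way to make them survive a reboot (a sketch, assuming the JDK and Hadoop paths used above) is to drop them into `/etc/profile.d`:

```shell
# Write the variables to a profile script so every login shell picks them up
cat > /etc/profile.d/hadoop.sh <<'EOF'
export JAVA_HOME=/usr/local/jdk1.8
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
export HADOOP_HOME=/usr/local/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
EOF
. /etc/profile.d/hadoop.sh   # load it into the current shell as well
```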

Set Hadoop's Java environment

vi etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8

Set the HDFS entry point

#Use a hostname alias from /etc/hosts here, not localhost

vi etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node01:9000</value>
    </property>
</configuration>
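The `node01` alias above has to resolve, so `/etc/hosts` needs a matching entry. A hedged example (the 192.168.80.101 address is taken from the block-pool ID in the tree output later in this post; substitute the machine's real IP):

```
# /etc/hosts
192.168.80.101   node01
```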

Set the data directories for each role

#Overrides belong in hdfs-site.xml; the hdfs-default.xml page linked above only documents the defaults
vi etc/hadoop/hdfs-site.xml

<configuration>
<!-- NameNode data directory: metadata and checksum files -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///opt/bigdata/nndata</value>
    </property>
<!-- DataNode data directory: block data -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///opt/bigdata/dndata</value>
    </property>
<!-- SecondaryNameNode data directory: merged fsimage checkpoints and edit logs -->
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///opt/bigdata/snndata</value>
    </property>
<!-- Block replication factor -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
<!-- SecondaryNameNode HTTP address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node01:50090</value>
    </property>
</configuration>
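These three directories do not have to exist in advance (`hdfs namenode -format` creates the NameNode directory, and the daemons create the others on first start), but pre-creating them is a cheap way to surface permission problems early. A minimal sketch, assuming the /opt/bigdata layout from the config above:

```shell
# Pre-create the role directories so permission problems surface now, not at daemon startup
mkdir -p /opt/bigdata/nndata /opt/bigdata/dndata /opt/bigdata/snndata
ls -ld /opt/bigdata/*data
```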

Configure the slaves file, i.e. the DataNode role list

vi etc/hadoop/slaves
node01

Initialize the NameNode data

hdfs namenode -format
#Verify the output: "successfully formatted" marks success; the lines below show where the fsimage was created
20/06/06 01:40:07 INFO namenode.FSImage: Allocated new BlockPoolId: BP-862864389-192.168.58.147-1591422007835
20/06/06 01:40:07 INFO common.Storage: Storage directory /opt/bigdata/nndata has been successfully formatted.
20/06/06 01:40:07 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/bigdata/nndata/current/fsimage.ckpt_0000000000000000000 using no compression
20/06/06 01:40:08 INFO namenode.FSImageFormatProtobuf: Image file /opt/bigdata/nndata/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
20/06/06 01:40:08 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0

Verify

#Verify NameNode initialization
[root@node01 hadoop]# ls -l  /opt/bigdata/nndata/current/
total 16
-rw-r--r-- 1 root root 321 Jun  6 01:40 fsimage_0000000000000000000
-rw-r--r-- 1 root root  62 Jun  6 01:40 fsimage_0000000000000000000.md5
-rw-r--r-- 1 root root   2 Jun  6 01:40 seen_txid
-rw-r--r-- 1 root root 205 Jun  6 01:40 VERSION

Start HDFS

#The "Stop it first" messages below mean the NameNode and DataNode were already running from an earlier attempt; only the SecondaryNameNode is newly started here. A clean start looks like the restart further down.
[root@node01 hadoop]# start-dfs.sh 
Starting namenodes on [node01]
node01: namenode running as process 3208. Stop it first.
node01: datanode running as process 3297. Stop it first.
Starting secondary namenodes [node01]
node01: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-node01.out

Verify that all daemons started

[root@node01 hadoop]# jps
3297 DataNode
3894 Jps
3208 NameNode
3788 SecondaryNameNode
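Besides `jps`, the cluster can be asked directly: `hdfs dfsadmin -report` prints the configured capacity and the live DataNode list, and in Hadoop 2.x the NameNode web UI answers on port 50070 (http://node01:50070). A sketch, guarded so it is a no-op on a machine without Hadoop installed:

```shell
# Ask the NameNode for a cluster report (skipped where the hdfs command is absent)
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfsadmin -report | head -n 20
fi
```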

Verify the files

[root@node01 hadoop]# tree /opt/bigdata/
/opt/bigdata/
├── dndata
│   ├── current
│   │   ├── BP-1025112156-192.168.80.101-1591428050114
│   │   │   ├── current
│   │   │   │   ├── finalized
│   │   │   │   ├── rbw
│   │   │   │   └── VERSION
│   │   │   ├── dncp_block_verification.log.curr
│   │   │   └── tmp
│   │   └── VERSION
│   └── in_use.lock
├── nndata
│   ├── current
│   │   ├── edits_inprogress_0000000000000000001
│   │   ├── fsimage_0000000000000000000
│   │   ├── fsimage_0000000000000000000.md5
│   │   ├── seen_txid
│   │   └── VERSION
│   └── in_use.lock
└── snndata
    └── in_use.lock
10 directories, 11 files

Restart HDFS and look at the files again: the previous in-progress edit log has been finalized (edits_0000000000000000001-0000000000000000003) and a new edits_inprogress segment opened. Separately, the SecondaryNameNode merges finalized edits into a fresh fsimage checkpoint periodically, every dfs.namenode.checkpoint.period seconds (3600 by default).

[root@node01 hadoop]# start-dfs.sh
Starting namenodes on [node01]
node01: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-node01.out
node01: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node01.out
Starting secondary namenodes [node01]
node01: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-node01.out

[root@node01 hadoop]# tree /opt/bigdata/
/opt/bigdata/
├── dndata
│   ├── current
│   │   ├── BP-1025112156-192.168.80.101-1591428050114
│   │   │   ├── current
│   │   │   │   ├── dfsUsed
│   │   │   │   ├── finalized
│   │   │   │   ├── rbw
│   │   │   │   └── VERSION
│   │   │   ├── dncp_block_verification.log.curr
│   │   │   └── tmp
│   │   └── VERSION
│   └── in_use.lock
├── nndata
│   ├── current
│   │   ├── edits_0000000000000000001-0000000000000000003
│   │   ├── edits_inprogress_0000000000000000004
│   │   ├── fsimage_0000000000000000000
│   │   ├── fsimage_0000000000000000000.md5
│   │   ├── seen_txid
│   │   └── VERSION
│   └── in_use.lock
└── snndata
    └── in_use.lock

10 directories, 13 files
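As a final smoke test, round-trip a small file through HDFS (a sketch; it requires the daemons started above, and the /user/root path simply mirrors the root user used throughout this walkthrough). `hadoop fs -put` accepts `-` to read from stdin:

```shell
# Write a file into HDFS and read it back; guarded so it is a no-op where hdfs is absent
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfs -mkdir -p /user/root
  echo hello | hdfs dfs -put -f - /user/root/hello.txt
  hdfs dfs -cat /user/root/hello.txt
fi
```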