linux hadoop-3.3.6 hbase-2.5.7

Software download

hadoop

https://dlcdn.apache.org/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz

The archive can be downloaded to a local machine first, or downloaded directly inside the virtual machine.
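If it is downloaded locally first, it can then be copied into the virtual machine over SSH. A minimal sketch, assuming the non-default SSH port 22222, the aidacp user and a target directory taken from the settings used later in this document:

scp -P 22222 hadoop-3.3.6.tar.gz aidacp@kvm-aiswdos-centos76-test-node1:/data/aidacp/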

If the download from the Apache CDN is slow, use the Tsinghua mirror instead:

  wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz

# wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz
--2024-09-18 14:09:21--  https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz
Resolving mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)... 101.6.15.130, 2402:f000:1:400::2
Connecting to mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)|101.6.15.130|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 730107476 (696M) [application/octet-stream]
Saving to: ‘hadoop-3.3.6.tar.gz’

100%[=========================================================================================>] 730,107,476 1.76MB/s   in 6m 6s  

2024-09-18 14:15:27 (1.90 MB/s) - ‘hadoop-3.3.6.tar.gz’ saved [730107476/730107476]

 

 hadoop-3.3.6

Extract the archive to the installation directory:

tar -xzvf hadoop-3.3.6.tar.gz  -C ../apps/

Check /etc/hosts and make sure this host's IP address and hostname are configured there.
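A minimal entry sketch, assuming 10.21.10.111 (used for fs.defaultFS below) and kvm-aiswdos-centos76-test-node1 (used in hdfs-site.xml and yarn-site.xml) refer to this same host:

# /etc/hosts -- map this host's IP to its hostname so the Hadoop daemons can resolve it
10.21.10.111    kvm-aiswdos-centos76-test-node1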

 

Modify the configuration files (they live under etc/hadoop in the Hadoop installation directory, here /data/aidacp/apps/hadoop-3.3.6/etc/hadoop).

core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.21.10.111:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/aidacp/apps/hadoop-3.3.6/tmp</value>
  </property>
  <property>
    <name>hadoop.native.lib</name>
    <value>false</value>
  </property>
  <property>
    <name>hadoop.http.authentication.simple.anonymous.allowed</name>
    <value>true</value>
  </property>
  <property>  
    <name>fs.hdfs.impl</name>  
    <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>  
    <description>The FileSystem for hdfs: uris.</description>  
  </property>
</configuration>

hdfs-site.xml

Under the Hadoop directory, create the nn, dn, and other data folders first.
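A sketch of creating the storage directories, matching hadoop.tmp.dir above and the dfs.namenode.name.dir / dfs.datanode.data.dir values configured below (note the configuration below uses data/mn and data/dn):

mkdir -p /data/aidacp/apps/hadoop-3.3.6/tmp        # hadoop.tmp.dir (core-site.xml)
mkdir -p /data/aidacp/apps/hadoop-3.3.6/data/mn    # dfs.namenode.name.dir (hdfs-site.xml)
mkdir -p /data/aidacp/apps/hadoop-3.3.6/data/dn    # dfs.datanode.data.dir (hdfs-site.xml)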

If dfs.namenode.http-address is configured without a port, the Hadoop Overview page cannot be reached in the browser.

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
 
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/aidacp/apps/hadoop-3.3.6/data/mn</value>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/aidacp/apps/hadoop-3.3.6/data/dn</value>
  </property>

  <property>
    <name>dfs.namenode.http-address</name>
    <value>http://kvm-aiswdos-centos76-test-node1:9870</value>
  </property>
</configuration>

hadoop-env.sh

If SSH is not listening on the default port 22, the actual port must be configured here; otherwise the SSH connections made by the start scripts will time out.

export HADOOP_SSH_OPTS="-p 22222"
export JAVA_HOME=/data/aidacp/apps/jdk8
export HADOOP_HOME=/data/aidacp/apps/hadoop-3.3.6
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
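start-all.sh / stop-all.sh connect to this host over SSH, so passwordless login for the service user is also needed. A sketch, assuming the aidacp user and port 22222 from this setup:

ssh-keygen -t rsa                                               # generate a key pair if none exists
ssh-copy-id -p 22222 aidacp@kvm-aiswdos-centos76-test-node1     # install the public key on this host
ssh -p 22222 aidacp@kvm-aiswdos-centos76-test-node1 true        # verify passwordless login works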

yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>kvm-aiswdos-centos76-test-node1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>0.0.0.0:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.application.classpath</name>
        <value>/data/aidacp/apps/hadoop-3.3.6/etc/hadoop:/data/aidacp/apps/hadoop-3.3.6/share/hadoop/common/lib/*:/data/aidacp/apps/hadoop-3.3.6/share/hadoop/common/*:/data/aidacp/apps/hadoop-3.3.6/share/hadoop/hdfs:/data/aidacp/apps/hadoop-3.3.6/share/hadoop/hdfs/lib/*:/data/aidacp/apps/hadoop-3.3.6/share/hadoop/hdfs/*:/data/aidacp/apps/hadoop-3.3.6/share/hadoop/mapreduce/*:/data/aidacp/apps/hadoop-3.3.6/share/hadoop/yarn:/data/aidacp/apps/hadoop-3.3.6/share/hadoop/yarn/lib/*:/data/aidacp/apps/hadoop-3.3.6/share/hadoop/yarn/*</value>
    </property>
</configuration>
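The long yarn.application.classpath value above does not need to be typed by hand; it can be taken from the output of the hadoop classpath command for this installation:

/data/aidacp/apps/hadoop-3.3.6/bin/hadoop classpath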

mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  
</configuration>

~/.bash_profile


export HADOOP_HOME=/data/aidacp/apps/hadoop-3.3.6
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HDFS_NAMENODE_USER="aidacp"
export HDFS_DATANODE_USER="aidacp"
export HDFS_SECONDARYNAMENODE_USER="aidacp"
export YARN_RESOURCEMANAGER_USER="aidacp"
export YARN_NODEMANAGER_USER="aidacp"
PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH
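
Reload the profile and check that the hadoop command resolves:

source ~/.bash_profile
hadoop version    # should report Hadoop 3.3.6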

 

./hdfs namenode -format

2024-04-15 16:52:36,695 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2024-04-15 16:52:36,695 INFO util.GSet: VM type       = 64-bit
2024-04-15 16:52:36,695 INFO util.GSet: 0.029999999329447746% max memory 981.5 MB = 301.5 KB
2024-04-15 16:52:36,695 INFO util.GSet: capacity      = 2^15 = 32768 entries
2024-04-15 16:52:36,747 INFO namenode.FSImage: Allocated new BlockPoolId: BP-296281341-10.19.83.151-1713171156732
2024-04-15 16:52:37,148 INFO common.Storage: Storage directory /data/yunwei/apps/hadoop-3.3.6/data/hdfs/nn has been successfully formatted.
2024-04-15 16:52:37,212 INFO namenode.FSImageFormatProtobuf: Saving image file /data/yunwei/apps/hadoop-3.3.6/data/hdfs/nn/current/fsimage.ckpt_0000000000000000000 using no compression
2024-04-15 16:52:37,643 INFO namenode.FSImageFormatProtobuf: Image file /data/yunwei/apps/hadoop-3.3.6/data/hdfs/nn/current/fsimage.ckpt_0000000000000000000 of size 401 bytes saved in 0 seconds .
2024-04-15 16:52:37,702 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2024-04-15 16:52:37,732 INFO namenode.FSNamesystem: Stopping services started for active state
2024-04-15 16:52:37,732 INFO namenode.FSNamesystem: Stopping services started for standby state
2024-04-15 16:52:37,737 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2024-04-15 16:52:37,738 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at host-10-19-83-151/10.19.83.151

./start-all.sh

# ./start-all.sh 
WARNING: Attempting to start all Apache Hadoop daemons as aidacp in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [kvm-aiswdos-centos76-test-node1]
Starting datanodes
Starting secondary namenodes [kvm-aiswdos-centos76-test-node1]
Starting resourcemanager
Starting nodemanagers
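
A quick sanity check after startup: jps should list NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager, and the bundled example job exercises HDFS and YARN end to end (jar path assumed from this installation layout):

jps
hadoop jar /data/aidacp/apps/hadoop-3.3.6/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar pi 2 5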

./stop-all.sh

# ./stop-all.sh 
WARNING: Stopping all Apache Hadoop daemons as aidacp in 10 seconds.
WARNING: Use CTRL-C to abort.
Stopping namenodes on [kvm-aiswdos-centos76-test-node1]
Stopping datanodes
Stopping secondary namenodes [kvm-aiswdos-centos76-test-node1]
Stopping nodemanagers
Stopping resourcemanager

View the web UIs

With the configuration above, the YARN cluster page is served at http://kvm-aiswdos-centos76-test-node1:8088 (yarn.resourcemanager.webapp.address) and the HDFS NameNode Overview page at http://kvm-aiswdos-centos76-test-node1:9870 (dfs.namenode.http-address).

[screenshot: hadoop-cluster]

[screenshot: hadoop-overview]
