Hadoop 3.1.1 installation and configuration (single node)

1. Java environment (omitted)

2. Configure passwordless SSH login

Generate a key pair and append the public key to authorized_keys. Either a DSA key:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

or an RSA key (preferred; OpenSSH 7.0 and later disables DSA keys by default):

$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys

If you see `sign_and_send_pubkey: signing failed: agent refused operation`, start the agent and add your key:

$ eval "$(ssh-agent -s)"
$ ssh-add

Test the passwordless login:

$ ssh localhost 
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-32-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

9 packages can be updated.
8 updates are security updates.

Last login: Tue Aug 15 23:16:52 2017 from ::1
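If the login still prompts for a password, the usual culprit is file permissions: sshd's StrictModes check (on by default) rejects keys when ~/.ssh or authorized_keys is group- or world-writable. A sketch of the expected modes, run against a scratch directory so it is safe anywhere:

```shell
# Sketch: the permissions sshd expects under StrictModes (the default).
# Uses a throwaway directory rather than the real ~/.ssh.
d=$(mktemp -d)
mkdir -p "$d/.ssh"
touch "$d/.ssh/authorized_keys"
chmod 700 "$d/.ssh"                     # directory: owner-only
chmod 600 "$d/.ssh/authorized_keys"     # key file: owner read/write only
stat -c '%a' "$d/.ssh" "$d/.ssh/authorized_keys"   # prints 700 then 600
rm -rf "$d"
```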

3. Create the hadoop group and user

sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
sudo adduser hduser sudo

We plan to start Hadoop as a regular user (the everyday account, here mymotif), so add that user to the hadoop group as well:

sudo usermod -a -G hadoop mymotif

4. Download, unpack, and change ownership

cd
wget http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.1.1/hadoop-3.1.1.tar.gz
#wget http://apache.fayea.com/hadoop/common/hadoop-2.8.1/hadoop-2.8.1.tar.gz
cd /opt
sudo tar xzvf  ~/hadoop-3.1.1.tar.gz 
sudo mv hadoop-3.1.1 hadoop
sudo chown -R hduser:hadoop hadoop

Create the data directory /home/hduser/tmp:

su hduser
cd
mkdir tmp
chmod -Rf g+rw tmp            # give the group (and thus mymotif) read/write access
exit
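The recursive g+rw above is what lets a second group member such as mymotif write into hduser's directory. A sketch of the effect, on a scratch directory:

```shell
# Sketch: recursive g+rw sets the group-write bit on every entry, so any
# member of the owning group can write there (subject to the read bit too).
d=$(mktemp -d)
mkdir -p "$d/tmp/sub"
chmod -Rf g+rw "$d/tmp"
stat -c '%A' "$d/tmp"         # group column now shows w, e.g. drwxrwxr-x
rm -rf "$d"
```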

5. Configuration

etc/hadoop/core-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- NameNode (HDFS master) RPC address; fs.defaultFS replaces the deprecated fs.default.name -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <!-- Base directory for files Hadoop creates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hduser/tmp</value>
    </property>
</configuration>

etc/hadoop/hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
      <!-- HDFS replication factor (1 on a single node) -->
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hduser/tmp/dfs/name</value>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hduser/tmp/dfs/data</value>
      </property>
</configuration>
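Section 6 below starts YARN, but the guide configures nothing for it; MapReduce jobs submitted to YARN will not run without two more files. A minimal sketch of what is commonly added for a single-node setup (the property names are the standard Hadoop ones; treat the exact values as assumptions for this layout):

etc/hadoop/mapred-site.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- Run MapReduce jobs on YARN instead of locally -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```

etc/hadoop/yarn-site.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- Let NodeManagers serve the MapReduce shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```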

In etc/hadoop/hadoop-env.sh, replace the `export JAVA_HOME=${JAVA_HOME}` line with the following:

# The java implementation to use.
export JAVA_HOME=/opt/lib/jvm/jdk1.8.0_141
export HADOOP_COMMON_LIB_NATIVE_DIR="/opt/hadoop/lib/native/"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/opt/hadoop/lib:/opt/hadoop/lib/native"

Set the environment variables: append the following to the global /etc/profile.

#Hadoop variables
export HADOOP_INSTALL=/opt/hadoop
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
###end of paste
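After `source /etc/profile` (or a re-login), the hadoop binaries should resolve through PATH. A sketch of that check, simulated with a stub script so it runs even without a real install (the stub and its banner are made up for the demonstration):

```shell
# Sketch: PATH resolution of the hadoop launcher, using a stub binary.
HADOOP_HOME=$(mktemp -d)                  # stands in for /opt/hadoop
mkdir -p "$HADOOP_HOME/bin"
printf '#!/bin/sh\necho "Hadoop 3.1.1"\n' > "$HADOOP_HOME/bin/hadoop"
chmod +x "$HADOOP_HOME/bin/hadoop"
PATH="$PATH:$HADOOP_HOME/bin"             # what the /etc/profile lines do
hadoop                                    # prints: Hadoop 3.1.1
rm -rf "$HADOOP_HOME"
```

On the real install the equivalent check is `hadoop version`.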

6. Starting Hadoop

1) Format the NameNode:

$ hdfs namenode -format

2) Start the NameNode and DataNode daemons:

$ start-dfs.sh
3) Start the ResourceManager and NodeManager daemons:

$ start-yarn.sh

(Note: start-all.sh is deprecated in favor of the two scripts above.)

Storing files:

$ hadoop fs -mkdir /user
$ hadoop fs -mkdir /user/hadoop
$ hadoop fs -mkdir /user/input
$ hadoop fs -ls /user/
Found 2 items
drwxr-xr-x   - mymotif supergroup          0 2017-08-18 14:50 /user/hadoop
drwxr-xr-x   - mymotif supergroup          0 2017-08-16 01:10 /user/input
$ hadoop fs -put test.c /user/hadoop/
$ hadoop fs -ls /user/hadoop
Found 1 items
-rw-r--r--   1 mymotif supergroup         66 2017-08-18 14:54 /user/hadoop/test.c
$ hadoop fs -copyFromLocal ~/hello.c /user/input/
$ hadoop fs -copyFromLocal ~/test.c /user/input/
$ hadoop fs -ls /user/input
Found 2 items
-rw-r--r--   1 mymotif supergroup         72 2017-08-19 15:46 /user/input/hello.c
-rw-r--r--   1 mymotif supergroup         66 2017-08-16 01:10 /user/input/test.c

Retrieving files:

$ hadoop fs -get /user/hadoop/test.c mytest.c

or, with an explicit URI:

$ hadoop fs -get hdfs://localhost:9000/user/hadoop/test.c mytest.c

7. Verification

$ jps
11856 NameNode
12210 SecondaryNameNode
12821 Jps
12502 NodeManager
12378 ResourceManager
12013 DataNode

YARN ResourceManager UI: http://localhost:8088

NameNode web UI: http://localhost:50070 (Hadoop 2.x; on 3.x it moved to http://localhost:9870)

NodeManager web UI: http://localhost:8042

Other issues:

$ ldd /opt/hadoop/lib/native/libhadoop.so.1.0.0
    linux-vdso.so.1 =>  (0x00007ffe79fb9000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f7b3feaa000)
    libjvm.so => not found
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f7b3fc8d000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f7b3f8c3000)
    /lib64/ld-linux-x86-64.so.2 (0x0000561243d70000)

libjvm.so cannot be found; creating a symlink fixes it:

 sudo ln -s /opt/lib/jvm/jdk1.8.0_141/jre/lib/amd64/server/libjvm.so /lib/x86_64-linux-gnu/libjvm.so

Eclipse plugin download: http://download.csdn.net/download/darkdragonking/9849522

https://pan.baidu.com/s/1eSpd7zk

Java test code:

package test;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadHDFS {

	public static String getStringByTXT(String txtFilePath, Configuration conf)
	{

		FSDataInputStream fsr = null;
		BufferedReader bufferedReader = null;
		String lineTxt = null;
		try
		{
			FileSystem fs = FileSystem.get(URI.create(txtFilePath),conf);
			fsr = fs.open(new Path(txtFilePath));
			bufferedReader = new BufferedReader(new InputStreamReader(fsr));		
			while ((lineTxt = bufferedReader.readLine()) != null)
			{
				if(lineTxt.split("\t")[0].trim().equals("序号")){
					return lineTxt;
				}
				
			}
		} catch (Exception e)
		{
			e.printStackTrace();
		} finally
		{
			if (bufferedReader != null)
			{
				try
				{
					bufferedReader.close();
				} catch (IOException e)
				{
					e.printStackTrace();
				}
			}
		}

		return lineTxt;
	}
	/**
	 * @param args
	 */
	public static void main(String[] args) {
		Configuration conf = new Configuration();
		String txtFilePath = "hdfs://localhost:9000/user/input/myhello.txt";
		String mbline = getStringByTXT(txtFilePath, conf);
		System.out.println(mbline);
	}

}

Contents of myhello.txt:

$ hadoop fs -cat hdfs://localhost:9000/user/input/myhello.txt
1序号	学院	班级	学号	姓名	
序号 234	学院	班级	学号	姓名	
序号	学院12 34	班级	学号	姓名

Output (the first line whose first tab-separated column is exactly 序号):

序号	学院12 34	班级	学号	姓名	
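The same first-column match can be reproduced with awk on a local copy of the data (the Java version reads it from HDFS; the temp file below stands in for myhello.txt):

```shell
# Sketch: awk equivalent of the Java program's match on column 1 == "序号".
f=$(mktemp)
printf '1序号\t学院\t班级\t学号\t姓名\n序号 234\t学院\t班级\t学号\t姓名\n序号\t学院12 34\t班级\t学号\t姓名\n' > "$f"
awk -F '\t' '$1 == "序号" { print; exit }' "$f"   # prints the third line only
rm -f "$f"
```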

Enjoy it!

Reposted from: https://my.oschina.net/u/2245781/blog/1511473
