To connect from Windows to a Hadoop cluster running in a virtual machine, you must first have working Java and Hadoop environments on the Windows side.
I. Environment Preparation
1. JDK 1.8
2. IDEA 2019
3. A Hadoop cluster on Linux, version hadoop-2.6.0-cdh5.14.2
II. Installing the Hadoop Environment on Windows 10
1. Download the Hadoop package
Download site:
https://archive.apache.org/dist/hadoop/common/
This article uses hadoop-2.6.0-cdh5.14.0. (Note: the Apache archive above hosts the stock Apache releases; CDH builds were distributed separately by Cloudera, so fetch the CDH tarball from wherever your cluster's distribution came from.)
Extract it to a local path: D:\soft\hadoop-2.6.0-cdh5.14.0
2. Configure the Hadoop environment variables
Computer -> Properties -> Advanced system settings -> Advanced -> Environment Variables
(1) Under system variables, create a new HADOOP_HOME variable whose value is the Hadoop extraction path: D:\soft\hadoop-2.6.0-cdh5.14.0
(2) Under system variables, edit Path and append: D:\soft\hadoop-2.6.0-cdh5.14.0\bin
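To confirm the variables took effect, open a new command prompt (an existing window will not see freshly added variables) and check:

echo %HADOOP_HOME%
echo %Path%

Both should print paths containing D:\soft\hadoop-2.6.0-cdh5.14.0.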
3. Download hadoop.dll and winutils.exe
Running Hadoop on Windows additionally requires two files, hadoop.dll and winutils.exe. Builds for each Hadoop version are available at https://github.com/cdarlint/winutils, or via https://pan.baidu.com/s/1hrNXq3y#list/path=%2F. Once downloaded, place hadoop.dll and winutils.exe in the bin directory under the Hadoop folder, and also copy hadoop.dll to C:\Windows\System32.
4. Modify the configuration files under D:\soft\hadoop-2.6.0-cdh5.14.0\etc\hadoop
(1) Modify the hadoop-env.cmd file
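The usual change here is to point JAVA_HOME at the local JDK. A minimal sketch (the JDK directory below is only an example; substitute your own install path):

@rem hadoop-env.cmd: point JAVA_HOME at the local JDK install.
@rem If the JDK sits under "Program Files", use the 8.3 short name PROGRA~1,
@rem since hadoop-env.cmd does not cope with spaces in the path.
set JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_141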
(2) Modify the core-site.xml file
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://node01:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/tempDatas</value>
    </property>
    <!-- Buffer size; in practice, tune this to the server's capacity -->
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
    <!-- Enable the HDFS trash mechanism so deleted data can be recovered
         from the trash; the unit is minutes -->
    <property>
        <name>fs.trash.interval</name>
        <value>10080</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hue.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hue.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.admin.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.admin.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.httpfs.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.httpfs.groups</name>
        <value>*</value>
    </property>
</configuration>
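Two details worth noting. First, fs.default.name is the deprecated spelling of fs.defaultFS (both still work on Hadoop 2.x). Second, the port in its value must match whatever address the client connects to later: the Java example in section III uses hdfs://node01:8020 (a common CDH default), so make sure both agree with your cluster's actual setting.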
(3) Modify the hdfs-site.xml file
<configuration>
    <!-- Path where the NameNode stores its metadata; in practice, decide the
         disk mount points first, then separate multiple directories with commas -->
    <!-- Dynamic commissioning/decommissioning of cluster nodes
    <property>
        <name>dfs.hosts</name>
        <value>/export/servers/hadoop-2.7.4/etc/hadoop/accept_host</value>
    </property>
    <property>
        <name>dfs.hosts.exclude</name>
        <value>/export/servers/hadoop-2.7.4/etc/hadoop/deny_host</value>
    </property>
    -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node01:50090</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>node01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/namenodeDatas</value>
    </property>
    <!-- Where the DataNode stores data blocks; in practice, decide the disk
         mount points first, then separate multiple directories with commas -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/datanodeDatas</value>
    </property>
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/edits</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/snn/name</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.edits.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/snn/edits</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
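Note that dfs.replication is 1 and dfs.permissions is false. Both are test-cluster conveniences: single-copy replication saves disk on a small virtual cluster, and disabling permission checks is what lets a Windows user whose name matches no HDFS account write to the file system. Neither belongs in production.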
(4) Modify the yarn-site.xml file
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node01</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>
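For reference, yarn.log-aggregation.retain-seconds is 604800 seconds, i.e. aggregated container logs are kept for 7 days (7 x 24 x 3600).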
(5) Modify the mapred-site.xml.template file
First rename the file from mapred-site.xml.template to mapred-site.xml, then edit it as follows:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node01:19888</value>
    </property>
</configuration>
5. Verify that Hadoop is installed correctly
Press Win+R, type cmd, and press Enter to open a command prompt window, then run hadoop version. If it prints the version information (the first line should begin with the Hadoop version, e.g. Hadoop 2.6.0-cdh5.14.0), the Hadoop environment is configured correctly.
III. Developing and Testing the HDFS Java API
1. Create a Maven project in IDEA
2. Create a module
3. Edit the module's pom.xml, add the following, and click Import Changes
<repositories>
    <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
</repositories>
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.6.0-mr1-cdh5.14.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.6.0-cdh5.14.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.6.0-cdh5.14.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>2.6.0-cdh5.14.2</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/junit/junit -->
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
        <version>RELEASE</version>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.0</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
                <encoding>UTF-8</encoding>
                <!-- <verbal>true</verbal> -->
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.4.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <minimizeJar>true</minimizeJar>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
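Two remarks on the build section: maven-compiler-plugin pins source/target to 1.8 to match the JDK in the environment list, and maven-shade-plugin builds a self-contained (fat) jar during the package phase, with minimizeJar dropping classes that are never referenced; this is convenient if you later submit the job jar to the cluster.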
Once Maven resolves the dependencies without errors, the repository import has succeeded.
4. Write the HDFS Java API operations
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;
import java.io.IOException;

public class HDFSOperate {
    @Test
    public void mkdirToHdfs() throws IOException {
        Configuration configuration = new Configuration();
        // Must match the cluster's actual fs.defaultFS (the core-site.xml above
        // uses port 9000; CDH clusters often default to 8020, adjust as needed)
        configuration.set("fs.defaultFS", "hdfs://node01:8020");
        // Obtain a FileSystem client for the cluster
        FileSystem fs = FileSystem.get(configuration);
        // mkdirs behaves like mkdir -p and returns true on success
        boolean created = fs.mkdirs(new Path("/zzz/dir01"));
        System.out.println("mkdir result: " + created);
        // Release the client
        fs.close();
    }
}
Run the test, then check via the HDFS web UI (http://node01:50070, per the dfs.namenode.http-address setting above): the directory has been created successfully.
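Beyond mkdirs, the same FileSystem handle covers uploads and listings. The sketch below is an extra example, not from the original post: the local file D:/test/hello.txt is a placeholder, and the port again assumes the cluster listens on 8020.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;
import java.io.IOException;

public class HDFSUploadTest {
    @Test
    public void uploadToHdfs() throws IOException {
        Configuration configuration = new Configuration();
        // Same caveat as above: must match the cluster's fs.defaultFS
        configuration.set("fs.defaultFS", "hdfs://node01:8020");
        FileSystem fs = FileSystem.get(configuration);
        // Copy a local Windows file into the directory created earlier
        // (D:/test/hello.txt is a placeholder; use any file you have)
        fs.copyFromLocalFile(new Path("file:///D:/test/hello.txt"),
                             new Path("/zzz/dir01"));
        // List the directory to confirm the upload succeeded
        for (FileStatus status : fs.listStatus(new Path("/zzz/dir01"))) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }
        fs.close();
    }
}

If the upload fails with an UnsatisfiedLinkError or a complaint about winutils, that typically means hadoop.dll or winutils.exe was not placed correctly; revisit step 3 of section II.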
IV. IDEA connects to the Hadoop cluster on Linux. Success!