HDFS API Operations:
1. First, create a Maven project named HdfsClientDemo in IDEA.
2. Add the required dependency coordinates and logging dependencies to pom.xml. Note that the Hadoop artifact versions should match the cluster: the startup logs in step 6 show hadoop-2.8.1, so the 2.7.2 versions below may need to be bumped accordingly.
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.8.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.7.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.7.2</version>
    </dependency>
    <dependency>
        <groupId>jdk.tools</groupId>
        <artifactId>jdk.tools</artifactId>
        <version>1.8</version>
        <scope>system</scope>
        <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
    </dependency>
</dependencies>
3. In the project's src/main/resources directory, create a new file named "log4j.properties" with the following content:
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n
4. Create the package com.zpark.hdfs (the package name must match the declaration in the code below).
5. Create the HdfsClient class with the file-upload code:
package com.zpark.hdfs;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

import java.io.IOException;
import java.net.URI;

public class HdfsClient {

    @Test
    public void put() throws IOException, InterruptedException {
        // Connect to the NameNode at hdp-1:9000 as user "root"
        FileSystem fileSystem = FileSystem.get(
                URI.create("hdfs://hdp-1:9000"), new Configuration(), "root");
        // Upload the local file d:\1.txt to the HDFS root directory
        fileSystem.copyFromLocalFile(new Path("d:\\1.txt"), new Path("/"));
        fileSystem.close();
    }
}
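The same API works in the other direction as well. As a sketch only (assuming the same cluster address and user as above, and that /1.txt has already been uploaded; the class name and the local target path d:\1_copy.txt are illustrative choices, not from the original notes), a download test might look like:

```java
package com.zpark.hdfs;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

import java.io.IOException;
import java.net.URI;

// Hypothetical companion class; mirrors the upload example above
public class HdfsClientGet {

    @Test
    public void get() throws IOException, InterruptedException {
        // Same connection settings as the upload example
        FileSystem fileSystem = FileSystem.get(
                URI.create("hdfs://hdp-1:9000"), new Configuration(), "root");
        // Download /1.txt from HDFS to an illustrative local path
        fileSystem.copyToLocalFile(new Path("/1.txt"), new Path("d:\\1_copy.txt"));
        fileSystem.close();
    }
}
```

Note that copyToLocalFile takes (source, destination), the reverse reading of copyFromLocalFile, and this code only runs against a live cluster.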
6. Then start HDFS:
[root@hdp-1 ~]# start-dfs.sh
Starting namenodes on [hdp-1]
hdp-1: starting namenode, logging to /root/apps/hadoop-2.8.1/logs/hadoop-root-namenode-hdp-1.out
hdp-4: starting datanode, logging to /root/apps/hadoop-2.8.1/logs/hadoop-root-datanode-hdp-4.out
hdp-2: starting datanode, logging to /root/apps/hadoop-2.8.1/logs/hadoop-root-datanode-hdp-2.out
hdp-3: starting datanode, logging to /root/apps/hadoop-2.8.1/logs/hadoop-root-datanode-hdp-3.out
hdp-1: starting datanode, logging to /root/apps/hadoop-2.8.1/logs/hadoop-root-datanode-hdp-1.out
Starting secondary namenodes [hdp-2]
hdp-2: starting secondarynamenode, logging to /root/apps/hadoop-2.8.1/logs/hadoop-root-secondarynamenode-hdp-2.out
7. Create 1.txt on the D: drive, then run the test in IDEA.
8. Open http://hdp-1:50070/ in a browser and check whether 1.txt appears.
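The check can also be done programmatically instead of through the web UI. A minimal sketch, assuming the same connection settings as step 5 (the class name is a hypothetical addition):

```java
package com.zpark.hdfs;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

import java.io.IOException;
import java.net.URI;

// Hypothetical verification class; requires a running cluster
public class HdfsExistsCheck {

    @Test
    public void checkUpload() throws IOException, InterruptedException {
        FileSystem fileSystem = FileSystem.get(
                URI.create("hdfs://hdp-1:9000"), new Configuration(), "root");
        // Prints true if the upload in step 5 succeeded
        System.out.println("/1.txt exists: " + fileSystem.exists(new Path("/1.txt")));
        fileSystem.close();
    }
}
```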