From zero: how to get the Hadoop 2.4 source code and attach it in Eclipse

If you want to do real development, studying the source code helps enormously. Without understanding the internals, the framework is a black box: when a problem comes up, you have no idea where to start. So this post teaches you:
Part One: how to get the source code
Part Two: how to attach the source code

Part One: How to get the source code

1. Download the Hadoop Maven source package

(1) From the official site
First download the source package hadoop-2.4.0-src.tar.gz from the Apache site.
Official download page

If you are not sure how to download from the official site, see: Beginner's guide: the Hadoop website, how to download each Hadoop (2.4) release, and how to browse the Hadoop API

(2) From a network drive
You can also download it from Baidu Pan:
http://pan.baidu.com/s/1kToPuGB

2. Fetch the source with Maven

There are two ways to get the source: from the command line, or from within Eclipse. This post mainly covers the command-line approach.

Fetching the source from the command line:

1. Extract the package

While extracting I ran into the problems below. Don't worry about them; we can keep going. (They are all caused by Windows' 260-character limit on the total path-plus-file-name length.)
1: Cannot create file: D:\hadoop2\hadoop-2.4.0-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-applicationhistoryservice\target\classes\org\apache\hadoop\yarn\server\applicationhistoryservice\ApplicationHistoryClientService$ApplicationHSClientProtocolHandler.class: The total length of the path and file name cannot exceed 260 characters. The system cannot find the path specified.        D:\hadoop2\hadoop-2.4.0-src.zip
2: Cannot create file: D:\hadoop2\hadoop-2.4.0-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-applicationhistoryservice\target\classes\org\apache\hadoop\yarn\server\applicationhistoryservice\timeline\LeveldbTimelineStore$LockMap$CountingReentrantLock.class: The system cannot find the path specified.        D:\hadoop2\hadoop-2.4.0-src.zip
3: Cannot create file: D:\hadoop2\hadoop-2.4.0-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-applicationhistoryservice\target\test-classes\org\apache\hadoop\yarn\server\applicationhistoryservice\webapp\TestAHSWebApp$MockApplicationHistoryManagerImpl.class: The system cannot find the path specified.        D:\hadoop2\hadoop-2.4.0-src.zip
4: Cannot create file: D:\hadoop2\hadoop-2.4.0-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-resourcemanager\target\test-classes\org\apache\hadoop\yarn\server\resourcemanager\monitor\capacity\TestProportionalCapacityPreemptionPolicy$IsPreemptionRequestFor.class: The total length of the path and file name cannot exceed 260 characters. The system cannot find the path specified.        D:\hadoop2\hadoop-2.4.0-src.zip
5: Cannot create file: D:\hadoop2\hadoop-2.4.0-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-resourcemanager\target\test-classes\org\apache\hadoop\yarn\server\resourcemanager\recovery\TestFSRMStateStore$TestFSRMStateStoreTester$TestFileSystemRMStore.class: The system cannot find the path specified.        D:\hadoop2\hadoop-2.4.0-src.zip
6: Cannot create file: D:\hadoop2\hadoop-2.4.0-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-resourcemanager\target\test-classes\org\apache\hadoop\yarn\server\resourcemanager\recovery\TestZKRMStateStore$TestZKRMStateStoreTester$TestZKRMStateStoreInternal.class: The total length of the path and file name cannot exceed 260 characters. The system cannot find the path specified.        D:\hadoop2\hadoop-2.4.0-src.zip
7: Cannot create file: D:\hadoop2\hadoop-2.4.0-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-resourcemanager\target\test-classes\org\apache\hadoop\yarn\server\resourcemanager\recovery\TestZKRMStateStoreZKClientConnections$TestZKClient$TestForwardingWatcher.class: The total length of the path and file name cannot exceed 260 characters. The system cannot find the path specified.        D:\hadoop2\hadoop-2.4.0-src.zip
8: Cannot create file: D:\hadoop2\hadoop-2.4.0-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-resourcemanager\target\test-classes\org\apache\hadoop\yarn\server\resourcemanager\recovery\TestZKRMStateStoreZKClientConnections$TestZKClient$TestZKRMStateStore.class: The total length of the path and file name cannot exceed 260 characters. The system cannot find the path specified.        D:\hadoop2\hadoop-2.4.0-src.zip
9: Cannot create file: D:\hadoop2\hadoop-2.4.0-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-resourcemanager\target\test-classes\org\apache\hadoop\yarn\server\resourcemanager\rmapp\attempt\TestRMAppAttemptTransitions$TestApplicationAttemptEventDispatcher.class: The total length of the path and file name cannot exceed 260 characters. The system cannot find the path specified.        D:\hadoop2\hadoop-2.4.0-src.zip
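
If you would rather avoid these MAX_PATH errors altogether, one workaround is to extract into a very short root directory so that even the deepest paths stay under 260 characters. A sketch using the 7-Zip command-line tool (assuming 7z is on your PATH; any extractor works the same way, and D:\h is just an example of a short root):

  rem Step 1: strip the gzip layer, producing hadoop-2.4.0-src.tar in D:\h
  7z x D:\hadoop2\hadoop-2.4.0-src.tar.gz -oD:\h
  rem Step 2: unpack the tar into D:\h, preserving full paths
  7z x D:\h\hadoop-2.4.0-src.tar -oD:\h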


2. Run Maven to fetch the source


Note: before using Maven you must have the JDK and protoc installed. If you haven't, see: how to install Maven and protoc on Windows 7.
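
A quick way to confirm the prerequisites are in place is to check each tool from a command prompt (standard commands; your version numbers will differ from mine):

  java -version
  mvn -version
  protoc --version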

(1) cd into hadoop-2.4.0-src\hadoop-maven-plugins and run mvn install:

  D:\hadoop2\hadoop-2.4.0-src\hadoop-maven-plugins>mvn install


It prints output like the following:

  [INFO] Scanning for projects...
  [WARNING]
  [WARNING] Some problems were encountered while building the effective model for org.apache.hadoop:hadoop-maven-plugins:maven-plugin:2.4.0
  [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but found duplicate declaration of plugin org.apache.maven.plugins:maven-enforcer-plugin @ org.apache.hadoop:hadoop-project:2.4.0, D:\hadoop2\hadoop-2.4.0-src\hadoop-project\pom.xml, line 1015, column 15
  [WARNING]
  [WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
  [WARNING]
  [WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
  [WARNING]
  [INFO]
  [INFO] Using the builder org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder with a thread count of 1
  [INFO]
  [INFO] ------------------------------------------------------------------------
  [INFO] Building Apache Hadoop Maven Plugins 2.4.0
  [INFO] ------------------------------------------------------------------------
  [INFO]
  [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-maven-plugins ---
  [INFO] Executing tasks

  main:
  [INFO] Executed tasks
  [INFO]
  [INFO] --- maven-plugin-plugin:3.0:descriptor (default-descriptor) @ hadoop-maven-plugins ---
  [INFO] Using 'UTF-8' encoding to read mojo metadata.
  [INFO] Applying mojo extractor for language: java-annotations
  [INFO] Mojo extractor for language: java-annotations found 2 mojo descriptors.
  [INFO] Applying mojo extractor for language: java
  [INFO] Mojo extractor for language: java found 0 mojo descriptors.
  [INFO] Applying mojo extractor for language: bsh
  [INFO] Mojo extractor for language: bsh found 0 mojo descriptors.
  [INFO]
  [INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ hadoop-maven-plugins ---
  [INFO] Using default encoding to copy filtered resources.
  [INFO]
  [INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ hadoop-maven-plugins ---
  [INFO] Nothing to compile - all classes are up to date
  [INFO]
  [INFO] --- maven-plugin-plugin:3.0:descriptor (mojo-descriptor) @ hadoop-maven-plugins ---
  [INFO] Using 'UTF-8' encoding to read mojo metadata.
  [INFO] Applying mojo extractor for language: java-annotations
  [INFO] Mojo extractor for language: java-annotations found 2 mojo descriptors.
  [INFO] Applying mojo extractor for language: java
  [INFO] Mojo extractor for language: java found 0 mojo descriptors.
  [INFO] Applying mojo extractor for language: bsh
  [INFO] Mojo extractor for language: bsh found 0 mojo descriptors.
  [INFO]
  [INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ hadoop-maven-plugins ---
  [INFO] Using default encoding to copy filtered resources.
  [INFO]
  [INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ hadoop-maven-plugins ---
  [INFO] No sources to compile
  [INFO]
  [INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hadoop-maven-plugins ---
  [INFO] No tests to run.
  [INFO]
  [INFO] --- maven-jar-plugin:2.3.1:jar (default-jar) @ hadoop-maven-plugins ---
  [INFO] Building jar: D:\hadoop2\hadoop-2.4.0-src\hadoop-maven-plugins\target\hadoop-maven-plugins-2.4.0.jar
  [INFO]
  [INFO] --- maven-plugin-plugin:3.0:addPluginArtifactMetadata (default-addPluginArtifactMetadata) @ hadoop-maven-plugins ---
  [INFO]
  [INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-maven-plugins ---
  [INFO]
  [INFO] --- maven-install-plugin:2.3.1:install (default-install) @ hadoop-maven-plugins ---
  [INFO] Installing D:\hadoop2\hadoop-2.4.0-src\hadoop-maven-plugins\target\hadoop-maven-plugins-2.4.0.jar to C:\Users\hyj\.m2\repository\org\apache\hadoop\hadoop-maven-plugins\2.4.0\hadoop-maven-plugins-2.4.0.jar
  [INFO] Installing D:\hadoop2\hadoop-2.4.0-src\hadoop-maven-plugins\pom.xml to C:\Users\hyj\.m2\repository\org\apache\hadoop\hadoop-maven-plugins\2.4.0\hadoop-maven-plugins-2.4.0.pom
  [INFO] ------------------------------------------------------------------------
  [INFO] BUILD SUCCESS
  [INFO] ------------------------------------------------------------------------
  [INFO] Total time: 4.891 s
  [INFO] Finished at: 2014-06-23T14:47:33+08:00
  [INFO] Final Memory: 21M/347M
  [INFO] ------------------------------------------------------------------------


(2) Run Maven again, this time from hadoop_home, which in my case is D:\hadoop2\hadoop-2.4.0-src:

  D:\hadoop2\hadoop-2.4.0-src>mvn eclipse:eclipse -DskipTests
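
Optionally, the maven-eclipse-plugin can also download the dependencies' source and javadoc jars and reference them in the generated .classpath files, which saves some manual source attachment later. These are standard plugin flags, though I have not verified them against the 2.4.0 build:

  D:\hadoop2\hadoop-2.4.0-src>mvn eclipse:eclipse -DskipTests -DdownloadSources=true -DdownloadJavadocs=true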

Part of the output looks like this:
  [INFO]
  [INFO] ------------------------------------------------------------------------
  [INFO] Reactor Summary:
  [INFO]
  [INFO] Apache Hadoop Main ................................ SUCCESS [  0.684 s]
  [INFO] Apache Hadoop Project POM ......................... SUCCESS [  0.720 s]
  [INFO] Apache Hadoop Annotations ......................... SUCCESS [  0.276 s]
  [INFO] Apache Hadoop Project Dist POM .................... SUCCESS [  0.179 s]
  [INFO] Apache Hadoop Assemblies .......................... SUCCESS [  0.121 s]
  [INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [  1.680 s]
  [INFO] Apache Hadoop MiniKDC ............................. SUCCESS [  1.802 s]
  [INFO] Apache Hadoop Auth ................................ SUCCESS [  1.024 s]
  [INFO] Apache Hadoop Auth Examples ....................... SUCCESS [  0.160 s]
  [INFO] Apache Hadoop Common .............................. SUCCESS [  1.061 s]
  [INFO] Apache Hadoop NFS ................................. SUCCESS [  0.489 s]
  [INFO] Apache Hadoop Common Project ...................... SUCCESS [  0.056 s]
  [INFO] Apache Hadoop HDFS ................................ SUCCESS [  2.770 s]
  [INFO] Apache Hadoop HttpFS .............................. SUCCESS [  0.965 s]
  [INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [  0.629 s]
  [INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [  0.284 s]
  [INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.061 s]
  [INFO] hadoop-yarn ....................................... SUCCESS [  0.052 s]
  [INFO] hadoop-yarn-api ................................... SUCCESS [  0.842 s]
  [INFO] hadoop-yarn-common ................................ SUCCESS [  0.322 s]
  [INFO] hadoop-yarn-server ................................ SUCCESS [  0.065 s]
  [INFO] hadoop-yarn-server-common ......................... SUCCESS [  0.972 s]
  [INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [  0.580 s]
  [INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [  0.379 s]
  [INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [  0.281 s]
  [INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [  0.378 s]
  [INFO] hadoop-yarn-server-tests .......................... SUCCESS [  0.534 s]
  [INFO] hadoop-yarn-client ................................ SUCCESS [  0.307 s]
  [INFO] hadoop-yarn-applications .......................... SUCCESS [  0.050 s]
  [INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [  0.202 s]
  [INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [  0.194 s]
  [INFO] hadoop-yarn-site .................................. SUCCESS [  0.057 s]
  [INFO] hadoop-yarn-project ............................... SUCCESS [  0.066 s]
  [INFO] hadoop-mapreduce-client ........................... SUCCESS [  0.091 s]
  [INFO] hadoop-mapreduce-client-core ...................... SUCCESS [  1.321 s]
  [INFO] hadoop-mapreduce-client-common .................... SUCCESS [  0.786 s]
  [INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [  0.456 s]
  [INFO] hadoop-mapreduce-client-app ....................... SUCCESS [  0.508 s]
  [INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [  0.834 s]
  [INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [  0.541 s]
  [INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [  0.284 s]
  [INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [  0.851 s]
  [INFO] hadoop-mapreduce .................................. SUCCESS [  0.099 s]
  [INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [  0.742 s]
  [INFO] Apache Hadoop Distributed Copy .................... SUCCESS [  0.335 s]
  [INFO] Apache Hadoop Archives ............................ SUCCESS [  0.397 s]
  [INFO] Apache Hadoop Rumen ............................... SUCCESS [  0.371 s]
  [INFO] Apache Hadoop Gridmix ............................. SUCCESS [  0.230 s]
  [INFO] Apache Hadoop Data Join ........................... SUCCESS [  0.184 s]
  [INFO] Apache Hadoop Extras .............................. SUCCESS [  0.217 s]
  [INFO] Apache Hadoop Pipes ............................... SUCCESS [  0.048 s]
  [INFO] Apache Hadoop OpenStack support ................... SUCCESS [  0.244 s]
  [INFO] Apache Hadoop Client .............................. SUCCESS [  0.590 s]
  [INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [  0.230 s]
  [INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [  0.650 s]
  [INFO] Apache Hadoop Tools Dist .......................... SUCCESS [  0.334 s]
  [INFO] Apache Hadoop Tools ............................... SUCCESS [  0.042 s]
  [INFO] Apache Hadoop Distribution ........................ SUCCESS [  0.144 s]
  [INFO] ------------------------------------------------------------------------
  [INFO] BUILD SUCCESS
  [INFO] ------------------------------------------------------------------------
  [INFO] Total time: 31.234 s
  [INFO] Finished at: 2014-06-23T14:55:08+08:00
  [INFO] Final Memory: 84M/759M
  [INFO] ------------------------------------------------------------------------
At this point the source has been downloaded; you will notice the source tree is now noticeably larger on disk.

Part Two: How to attach the source in Eclipse

Suppose we have the following example program: hadoop2.2mapreduce例子.rar (1.14 MB).
The archive contains two files: MaxTemperature.zip, a MapReduce example, and mockito-core-1.8.5.jar, a jar that the example depends on.
Note that the example targets MapReduce on Hadoop 2.2, but that does not affect attaching the source; it simply serves to show you how source attachment works.
Extract the archive and import the project into Eclipse.
(If you are not familiar with importing projects, see: a beginner's guide to importing an Eclipse project.)
After importing you will see a lot of red error markers. They are all caused by missing jars, so let's fix these build errors first.

First: add the missing jars
(1) Add mockito-core-1.8.5.jar.

(2) Add the jars from the Hadoop 2.4 binary distribution. They live under share\hadoop inside hadoop_home; on my machine that is D:\hadoop2\hadoop-2.4.0\share\hadoop.
Add the jars you find there to the build path: the jars in each module's lib folder as well as the jars alongside it (a directory sketch follows below).
If you are not sure how to add jars to the build path, see: a summary of Hadoop development setups and how-tos.
Note that what we add here is the compiled binary package, the 64-bit hadoop-2.4.0.tar.gz build
(link: http://pan.baidu.com/s/1c0vPjG0 password: xj6l)
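
For reference, in a stock 2.4.0 binary distribution the jars a MapReduce project typically needs live in directories like these (a sketch based on my layout; which jars you actually need depends on your program):

  dir /b D:\hadoop2\hadoop-2.4.0\share\hadoop\common\*.jar
  dir /b D:\hadoop2\hadoop-2.4.0\share\hadoop\common\lib\*.jar
  dir /b D:\hadoop2\hadoop-2.4.0\share\hadoop\hdfs\*.jar
  dir /b D:\hadoop2\hadoop-2.4.0\share\hadoop\mapreduce\*.jar
  dir /b D:\hadoop2\hadoop-2.4.0\share\hadoop\yarn\*.jar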


For more package downloads, see: a roundup of jars and installers for the Hadoop family, Storm, Spark, Linux, Flume, and more.

Second: attach the source
1. Once the jars are added, the errors are gone.
2. Source not found

When you want to see how a class or method is implemented and drill into it via Open Call Hierarchy, Eclipse cannot find the source file.

3. Attach Source

In the Attach Source dialog, fill in the three fields in order; after selecting the compressed archive, click OK and the work is done.

Note: the archive to select (hadoop-2.2.0-src.zip in my screenshot; hadoop-2.4.0-src.zip for this walkthrough) is the source tree we downloaded above with Maven, compressed afterwards. Remember that it must be a zip archive.
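
If you still need to create that zip, any archiver will do. For example, with the 7-Zip command-line tool (assuming 7z is on your PATH; the paths mirror the ones used earlier):

  rem Pack the Maven-prepared source tree into a zip for Attach Source
  7z a -tzip D:\hadoop2\hadoop-2.4.0-src.zip D:\hadoop2\hadoop-2.4.0-src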


4. Verify the attachment by viewing the source

Run the earlier operation again via Open Call Hierarchy. This time the hierarchy is populated. Double-click the main class entry (the one highlighted in red in my screenshot) and the attached source opens in the editor.

A question:
Careful readers will notice that the editor tab still shows a .class file rather than a .java file, and may wonder whether its contents differ from the real .java source. They do not: with the source attached, Eclipse renders the actual source for that class. If you are curious, you can verify this yourself, as sketched below.
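
One rough command-line check is javap, which ships with the JDK: it lists the members of a compiled class, which you can compare against the attached .java source. The jar path and class name below are just an illustration from my layout:

  javap -classpath D:\hadoop2\hadoop-2.4.0\share\hadoop\mapreduce\hadoop-mapreduce-client-core-2.4.0.jar org.apache.hadoop.mapreduce.Job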
