Errors when connecting Kettle to Hadoop

Writing files from Windows to HDFS with Kettle

One

2016/07/19 14:14:53 - Spoon - Starting job...
2016/07/19 14:14:53 - load_hdfs - Starting job execution
2016/07/19 14:14:53 - load_hdfs - Starting job entry [Hadoop Copy Files]
2016/07/19 14:14:53 - Hadoop Copy Files - Starting...
2016/07/19 14:14:53 - Hadoop Copy Files - Processing row, source file/folder: [file:///E:/weblogs_rebuild.txt/weblogs_rebuild.txt] ... destination file/folder: [hdfs://hadoop:8020/data] ... wildcard: [^.*\.txt]
2016/07/19 14:14:53 - cfgbuilder - Warning: The configuration parameter [org] is not supported by the default configuration builder for scheme: sftp
2016/07/19 14:14:53 - cfgbuilder - Warning: The configuration parameter [org] is not supported by the default configuration builder for scheme: sftp
2016/07/19 14:14:53 - cfgbuilder - Warning: The configuration parameter [org] is not supported by the default configuration builder for scheme: sftp
2016/07/19 14:14:53 - Hadoop Copy Files - ERROR (version 6.1.0.1-196, build 1 from 2016-04-07 12.08.49 by buildguy) : File system exception: Could not copy "file:///E:/weblogs_rebuild.txt/weblogs_rebuild.txt" to "hdfs://hadoop:8020/data/weblogs_rebuild.txt".
2016/07/19 14:14:53 - Hadoop Copy Files - ERROR (version 6.1.0.1-196, build 1 from 2016-04-07 12.08.49 by buildguy) : Caused by: Could not write to "hdfs://hadoop:8020/data/weblogs_rebuild.txt".
2016/07/19 14:14:53 - Hadoop Copy Files - ERROR (version 6.1.0.1-196, build 1 from 2016-04-07 12.08.49 by buildguy) : Caused by: Permission denied: user=Administrator, access=WRITE, inode="/data/weblogs_rebuild.txt":root:hadoop:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:320)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1682)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1665)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2517)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2452)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2335)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:623)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
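The decisive line is the Permission denied one: without Kerberos, the Kettle client identifies itself to HDFS as the local Windows user (here Administrator), while the target directory is owned by root:hadoop with mode drwxr-xr-x, so only root may write into it. This can be confirmed from the cluster side, for example:

    # list ownership and mode of the target directory (expect something like
    # drwxr-xr-x   - root hadoop   ...   /data)
    hadoop fs -ls /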

 

Some troubleshooting approaches found online

1. Modify the Hadoop configuration file hdfs-site.xml on the server, setting the HDFS permission check to false, and restart Hadoop. I tried this, but after restarting the cluster from Ambari the value had changed back to true; most likely this is because Ambari manages these configuration files itself and rewrites them when it restarts services, so the change would have to be made through the Ambari UI rather than by editing the file directly.
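For reference, this is roughly what that hdfs-site.xml change would look like. The property name is an assumption, since the original post does not show it (older Hadoop releases call it dfs.permissions instead):

    <!-- hdfs-site.xml: turn off HDFS permission checking (assumed property name) -->
    <property>
      <name>dfs.permissions.enabled</name>
      <value>false</value>
    </property>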

2. Grant chmod 777 on the target directory; the error persisted. (See the sketch below.)
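For clarity, the permission change here is on the HDFS directory, not a local one; a sketch of the command, assuming /data is the target path from the log above:

    # make the HDFS target directory world-writable (still failed in this case)
    hadoop fs -chmod -R 777 /data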

3. The fix that finally worked: give the Windows user its own home directory on HDFS:

hadoop fs -mkdir /user/Administrator

hadoop fs -chown Administrator:hdfs /user/Administrator
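Presumably this works because HDFS conventionally treats /user/<username> as the connecting user's home directory, so once Administrator owns /user/Administrator the Windows client finally has a path it is allowed to write into. A quick way to verify the ownership took effect:

    # should list /user/Administrator owned by Administrator:hdfs
    hadoop fs -ls /user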

 

Two

2016/07/20 10:07:03 - Hadoop Copy Files - ERROR (version 6.1.0.1-196, build 1 from 2016-04-07 12.08.49 by buildguy) : File system exception: Could not copy "file:///E:/Test/red.txt" to "hdfs://hadoop:8020/kettle/red.txt".
2016/07/20 10:07:03 - Hadoop Copy Files - ERROR (version 6.1.0.1-196, build 1 from 2016-04-07 12.08.49 by buildguy) : Caused by: Could not close the output stream for file "hdfs://hadoop:8020/kettle/red.txt".
2016/07/20 10:07:03 - Hadoop Copy Files - ERROR (version 6.1.0.1-196, build 1 from 2016-04-07 12.08.49 by buildguy) : Caused by: Connection timed out: no further information

Cause: this error occurred on a POWER server, while the exact same procedure succeeded against an x86 server.
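As general background rather than anything from the original post: a timeout while closing the output stream typically means the client reached the NameNode but then failed to stream the block data to a DataNode, for instance because the DataNode addresses the NameNode hands back are not routable from the Windows machine. One commonly suggested client-side setting to try in that situation is:

    <!-- client-side hdfs-site.xml: have the client contact DataNodes by
         hostname instead of a possibly unroutable internal IP -->
    <property>
      <name>dfs.client.use.datanode.hostname</name>
      <value>true</value>
    </property>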

For the detailed fix, see my other post, "Starting Kettle on Linux, and writing data from Kettle to HDFS on Linux and Windows (3)": http://www.cnblogs.com/womars/p/5718349.html

 

Reposted from: https://www.cnblogs.com/zeppelin/p/5685665.html
