hadoop 2.6.0: error when starting the datanode. The hadoop error log is written to /hadoop2.6.0/logs/hadoop-tucl-datanode-tucl.log: java.io.IOException: Incompatible clusterIDs in /usr/hadoop/data: namenode clusterID = CID-faa65317-6867-4e61-b366-19370a96459c; dat…
Generating random integers in MySQL and Oracle. Oracle: 1. A decimal in (0~1): select dbms_random.value from dual; 2. A decimal within a given range (0~100): select dbms_random.value(0,100) from dual; 3. An integer within a given range (0~100): select trunc(dbms_random.value(0,100)) from dual; 4. A specified leng…
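For comparison, the same three patterns can be sketched in plain Java with java.util.Random — a minimal illustration, not part of the original post:

```java
import java.util.Random;

public class RandomDemo {
    public static void main(String[] args) {
        Random rnd = new Random();

        // 1. A decimal in [0, 1), like dbms_random.value
        double d = rnd.nextDouble();

        // 2. A decimal in [0, 100), like dbms_random.value(0, 100)
        double ranged = rnd.nextDouble() * 100;

        // 3. An integer in [0, 100), like trunc(dbms_random.value(0, 100))
        int i = rnd.nextInt(100);

        System.out.println(d + " " + ranged + " " + i);
    }
}
```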
A simple log4j example. 1. Create a new Java project. 2. Import the log4j jar; I used log4j-1.2.17.jar. 3. Create a log4j.properties file under the src directory with the following content: log4j.rootLogger=debug,stdout log4j.appender.stdout=org.apache.log4j.ConsoleAppender log4j.appe…
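The properties file above is cut off mid-line; a typical minimal configuration along the same lines looks like the following (a sketch of common log4j 1.2 settings, not the post's exact file — the pattern layout is an assumption):

```properties
log4j.rootLogger=debug,stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c - %m%n
```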
How to use the WdatePicker calendar control. 1. Display across unlimited nested frames: wherever you place the date control, you need not worry about it being covered by an outer iframe and hurting the user experience, because the My97 date control can display across any number of frame levels. Example 2-7 demonstrates this: no matter how deeply the frames are nested there is nothing to worry about, even with scrollbars. 2. Minguo-era calendars and other special calendars: when the year format is set to yyy, using the year-offset property yearOffset (def…
LinkedHashMap vs HashMap. While generating fake data myself, I found that data generated with a HashMap did not line up: the key/value pairs were not in the order they were inserted, whereas a LinkedHashMap kept them in insertion order. I copied an example from the web here for easy reference later. import java.util.HashMap; import java.util.Iterator; import java.util.L…
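The behavior described above can be demonstrated in a few lines (a minimal sketch, independent of the example the post copied):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    public static void main(String[] args) {
        // HashMap makes no guarantee about iteration order
        Map<String, Integer> hash = new HashMap<>();
        // LinkedHashMap iterates in insertion order
        Map<String, Integer> linked = new LinkedHashMap<>();
        for (String k : new String[]{"c", "a", "b"}) {
            hash.put(k, k.length());
            linked.put(k, k.length());
        }
        System.out.println("LinkedHashMap keys: " + linked.keySet()); // insertion order: [c, a, b]
        System.out.println("HashMap keys:       " + hash.keySet());   // order not guaranteed
    }
}
```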
Notes on some Spring annotation issues. 1. What @Autowired means: with this annotation, Spring lets you omit the setter/getter methods for the objects a bean references; it performs the set/get wiring for you automatically. Normally you would have to write a setter/getter for userDao inside userService, but with @Autowired you only need to declare the field in the UserService implementation class.
Server Tomcat v6.0 Server at localhost was unable to start within 45 seconds. If the server requires When starting Tomcat, the error log reports: Server Tomcat v6.0 Server at localhost was unable to start within 45 seconds. If the server requires more time, try increasing the timeout in the server editor. Double-click the tomcat entry under Servers and open the tomc…
Setting up ssh for a pseudo-distributed hadoop node. Goal: ssh localhost without typing a password. Steps — prerequisite: confirm the host has ssh and sshd installed (the ssh client and server, respectively), and use the hadoop account. Method 1: ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys [r…
org.eclipse.core.runtime.CoreException: D:\workspace\hadoop-1.1.2\build.xml:83: Execute failed: java When compiling the hadoop project with Ant on Windows 7, I hit an error like this: org.eclipse.core.runtime.CoreException: D:\workspace\hadoop-1.1.2\build.xml:83: Execute failed: java.io.IOException: Cannot run program "sed" Open the build.xml file and locate the s…
hadoop.security.AccessControlException: Permission denied user=xxx access=READ_EXECUTE, inode=".staging" hadoop.security.AccessControlException: Permission denied user=xxx access=READ_EXECUTE, inode=".staging" inode="user":hadoop:supergroup:rwxr-xr-x Solution: added this entry to conf/hdfs-site.xml df…
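The snippet is truncated right where the configuration entry begins. On old Hadoop versions, the usual workaround for this error was to disable HDFS permission checking in conf/hdfs-site.xml — shown here as an assumption about what the truncated entry was, and note this relaxes security:

```xml
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
```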
Unknown protocol to name node: org.apache.hadoop.mapred.JobSubmissionProtocol Cannot connect to the Map/Reduce location: hadoop java.io.IOException: Unknown protocol to name node: org.apache.hadoop.mapred.JobSubmissionProtocol at org.apache.hadoop.hdfs.server.namenode.NameNod…
Permission denied: user=tucailing, access=READ_EXECUTE inode=".staging":root:supergroup:rwx Noting this down: the fix was that the mapreduce jobtracker and namenode ports had been misconfigured.
Linux learning journey (exception: Cannot write file to virtual machine. Aborting the file copy operation) Reposted from: http://hi.baidu.com/xyqqj_lr/item/7012f2e4c64c93255a7cfb5c I set up a virtual machine locally, loaded the linux 5.4 iso, and installed the system successfully. After it was installed, I found it was the English version; I remembered installing the Chinese version back in school, so I wanted the text displayed in Chinese. A Baidu search turned up a solution; following this page (http://zhidao.baidu.com/question/416558…
Installing and configuring pig 0.9.2. 1. Download pig: download pig-0.9.2.tar.gz from http://mirrors.cnnic.cn/apache/pig/; I downloaded it to the /usr/local/ directory. 2. Extract and install pig: enter the local directory and run # tar -zxvf pig-0.9.2.tar.gz to extract into the pig-0.9.2 directory. 3. Configure the environment variables: # vi /etc/profil…
DataNode: java.io.IOException: Incompatible namespaceIDs in /dfs/dfs/data: namenode namespaceID = 69 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /dfs/dfs/data: namenode namespaceID = 698802253; datanode namespaceID = 817847838 at org.apach…
Configuring the sharelib for oozie: java.lang.NoClassDefFoundError: org/apache/pig/Main As a note: when calling pig from oozie, the sharelib is needed; at the time I just ran the job directly, and it failed with: 2013-07-03 20:44:48,504 WARN PigActionExecutor:542 - USER[root] GROUP[-] TOKEN[] APP[pig-wf] JOB[0000004-130703183035408-oozie-root-W] A…
df: check remaining disk space on linux [root@localhost conf]# df Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda2 3960348 3404848 351076 91% / /dev/sda5 14270000 275044 132…
FSNamesystem: Not able to place enough replicas, still in need of 1 2013-07-04 01:21:50,677 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1 2013-07-04 01:21:50,677 ERROR org.apache.hadoop.security.UserGr…