1. Instead of using root, create a separate user and group to install Hadoop
When I was using root to install Hadoop, I ran into a problem where HDFS would not start. The error messages were something like:
a. unrecognized argument: -jvm
b. cannot create virtual machine
After I created a separate "hadoop" user and installed Hadoop as that user, the problem disappeared.
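As a sketch, creating that dedicated account might look like this (the user and group names "hadoop" match the ones used in this note; run as root, and adjust the shell and home directory to taste):

```shell
# Hypothetical setup commands, run as root: create a "hadoop" group and a
# "hadoop" user with a home directory, then confirm the account exists.
groupadd hadoop
useradd -m -g hadoop -s /bin/bash hadoop
id hadoop
```

You would then install and start Hadoop while logged in as this user, not as root.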
2. Make sure the hostname of your master is not aliased to 127.0.0.1
Often, the /etc/hosts file on your master node aliases its hostname to 127.0.0.1; please remove that alias and keep only the localhost entries.
After that, the /etc/hosts file should look like this:
127.0.0.1    localhost.localdomain localhost
::1          localhost6.localdomain6 localhost6
10.130.237.8 gonro07-VM18102.ca.com gonro07-VM18102
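To verify the fix, you can check what the hostname actually resolves to; a quick sketch (the expected IP shown in the comment is just the example address from this note):

```shell
# Check how the local hostname resolves. After removing the 127.0.0.1 alias,
# this should print the machine's real network IP (e.g. 10.130.237.8 above),
# not the loopback address.
getent hosts "$(hostname)"
```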
3. Use the same "conf" dir across the cluster
We can use the rsync command to accomplish this task:
for a in `sort -u /home/hadoop/hadoop/conf/{slaves,masters}`; do
  rsync -e ssh -v -a --include 'conf/*' "/home/hadoop/hadoop/" "${a}:/home/hadoop/hadoop"
done
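The key trick in that loop is building the host list with sort -u, which merges the slaves and masters files and drops duplicate hostnames so no node is synced twice. A small self-contained sketch (the file contents here are made up for the demo):

```shell
# Demo of building the host list: sort -u merges the two files and removes
# duplicates, so a host listed in both masters and slaves appears only once.
dir=$(mktemp -d)
printf 'node1\nnode2\n' > "$dir/slaves"
printf 'node1\n' > "$dir/masters"
sort -u "$dir/slaves" "$dir/masters"   # prints: node1, then node2
```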