1. The /etc/group file lists all groups; /etc/shadow and /etc/passwd hold every user account that exists on the system.
su -
groupadd hadoop
useradd -s /bin/bash -m hadoop
passwd hadoop
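As a quick sanity check (optional, not part of the original steps), the new account and group should now be visible:
id hadoop                           # uid, gid, and groups for the new user
grep hadoop /etc/passwd /etc/group  # the account and group entries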
2. Grant the hadoop user sudo rights:
(1) Run visudo
(2) Add the line: hadoop ALL=(ALL:ALL) ALL
(3) Press ^X to exit and save the file as /etc/sudoers
(4) Background:
When visudo opens, the editor's key hints at the bottom of the screen are written in the ^X style.
^ stands for Ctrl.
Explanation of root ALL=(ALL) ALL:
root is the user being granted the privilege (here, the root user);
the first ALL means all hosts;
the second ALL means all users the command may be run as;
the third ALL means all commands;
the whole line means: the root user may run any command, as any user, on any host.
%admin ALL=(ALL) ALL is the same, except the grant goes to the admin group.
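A simple way to confirm the grant took effect (not in the original notes):
su - hadoop
sudo whoami    # prompts for hadoop's password, then prints root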
3. Install the JDK and configure the environment as follows:
vim /etc/profile
Append:
#set java hadoop hive
JAVA_HOME=/usr/local/jdk
JRE_HOME=/usr/local/jdk/jre
HADOOP_HOME=/usr/local/hadoop
HIVE_HOME=/usr/local/hive
CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
PATH=.:$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:$PATH
export JAVA_HOME
export JRE_HOME
export HADOOP_HOME
export HIVE_HOME
export CLASSPATH
export PATH
source /etc/profile
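A quick check that the variables took effect in the current shell (optional):
java -version                              # should report the JDK under /usr/local/jdk
echo $JAVA_HOME $HADOOP_HOME $HIVE_HOME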
4. Configure hosts and hostname
vim /etc/hostname and set it to the machine's own name; the names should match /etc/hosts below (master on the namenode, slave1/slave2 on the datanodes).
This must be done on every machine.
vim /etc/hosts
192.168.110.130 master
192.168.110.131 slave1
192.168.110.132 slave2
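Name resolution can then be checked from any node (optional):
ping -c 1 slave1    # should resolve to 192.168.110.131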
5、apt-get install openssh-server
On the namenode, do the following:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
scp .ssh/authorized_keys slave1:~/.ssh/authorized_keys
scp .ssh/authorized_keys slave2:~/.ssh/authorized_keys
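Note that copying into ~/.ssh assumes the directory already exists on the slaves (running ssh-keygen there once, or a plain mkdir ~/.ssh, creates it). Passwordless login can then be verified:
ssh slave1 hostname    # should print slave1 with no password prompt
ssh slave2 hostname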
6. Configure Hadoop: fix ownership, then edit the files under conf/:
chown -fR hadoop:hadoop /usr/local/hadoop-1.0.3
chown -fR hadoop:hadoop /usr/local/hadoop

vim hadoop-env.sh
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_HOME_WARN_SUPPRESS=TRUE

vim core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>

vim hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

vim mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>

vim masters
master

vim slaves
slave1
slave2
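Every node needs the same configuration; assuming the slaves use the identical /usr/local/hadoop path set up above, one way to push the conf directory out is:
scp -r /usr/local/hadoop/conf slave1:/usr/local/hadoop/
scp -r /usr/local/hadoop/conf slave2:/usr/local/hadoop/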
7. Format HDFS, start the cluster, and verify:
hadoop namenode -format
start-all.sh
hadoop dfsadmin -report
http://master:50070
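jps (shipped with the JDK) gives a quick view of which daemons came up on each node; with this layout the master should show NameNode, SecondaryNameNode, and JobTracker, and each slave should show DataNode and TaskTracker:
jps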
8. Hive
(1)apt-get install mysql-server-5.5
(2) Create a MySQL user for the metastore:
mysql> CREATE USER 'hive'@'localhost' IDENTIFIED BY '123456789';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost' WITH GRANT OPTION;
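The new account can be given a quick test from the shell (an optional check, not in the original steps):
mysql -u hive -p'123456789' -e 'SELECT 1;'   # prints 1 if the user and grant work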
(3) Put the MySQL JDBC driver jar under $HIVE_HOME/lib/
(4)vim hive-site.xml
<configuration>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/tmp/hive-${user.name}</value>
    <description>Scratch space for Hive jobs</description>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>
  <property>
    <name>hive.metastore.local</name>
    <value>true</value>
    <description>controls whether to connect to remote metastore server or open a new metastore server in Hive Client JVM</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456789</value>
    <description>password to use against metastore database; must match the MySQL password set in (2)</description>
  </property>
</configuration>
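A minimal smoke test (assuming hive is on the PATH from step 3): any statement that touches the metastore forces Hive to connect to MySQL and create its schema, so a bad JDBC setting fails here immediately rather than at query time:
hive -e 'SHOW TABLES;'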