The project calls for building a Hadoop cluster of roughly 40 machines, with a second disk mounted on each node so that Hadoop's data directory lives on that disk rather than on the system disk.
For the machines that already have the OS and Hadoop installed but lack the patch, it is enough to apply the patch and recompile. Concretely: enter the 2.6.28.10 kernel source, apply the INCAST patch, and then run
make clean; make -j4; make modules_install; make install; mkinitramfs -o /boot/initrd.img-2.6.28.10; update-grub
One small gotcha: if you delete the 2.6.28.10 entry from menu.lst beforehand, the update-grub run after compiling will not modify menu.lst at all. I re-ran make install inside /usr/src/linux-2.6.28.10; this time update-grub popped up a dialog whose default was still to leave the file unchanged. Move the selection to the "modify" option and confirm, then open menu.lst, put the entry you need in first position, and reboot.
Hadoop itself does not need to be reinstalled after the kernel is recompiled.
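Spelled out as one script, the rebuild sequence above looks roughly like this. This is only a sketch: the source path is the /usr/src/linux-2.6.28.10 directory mentioned above, and the function is left for you to call on each node after applying the INCAST patch.

```shell
# Sketch of the kernel rebuild steps from above, wrapped in a function.
# Not invoked here -- call rebuild_kernel after the patch is applied.
rebuild_kernel() {
    set -e                      # abort at the first failed step
    cd /usr/src/linux-2.6.28.10
    make clean
    make -j4                    # parallel build with 4 jobs
    make modules_install
    make install
    mkinitramfs -o /boot/initrd.img-2.6.28.10   # rebuild the initrd
    update-grub                 # regenerate the boot menu
}
```

Wrapping the steps in a function with set -e means a broken build stops before it ever reaches update-grub.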
Mounting the new disk: I had assumed this would be trivial, just mount the device on some directory, but in practice the mount ran into a problem.
The actual fix is simple: partition and format the disk first.
Create the partition:
fdisk /dev/sdb
Command (m for help): m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)
Command (m for help): n    (add a new partition)
Command action
e extended
p primary partition (1-4)
p    (primary partition)
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610):
Using default value 2610
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
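Since the same partitioning has to be repeated on ~40 machines, the interactive session above can be captured as a canned answer sequence and piped into fdisk. This is a sketch only; verify the device name on every node first, because feeding the wrong disk to fdisk destroys its partition table.

```shell
# Answers replayed in the order the interactive session used them:
# n (new partition), p (primary), 1 (partition number), two blank
# lines (accept the default first and last cylinder), w (write & quit).
ANSWERS='n
p
1


w'
# Destructive -- uncomment only after double-checking the device:
# printf '%s\n' "$ANSWERS" | fdisk /dev/sdb
```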
Format the disk:
[root@lmap ~]# mkfs -t ext2 /dev/sdb1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
2621440 inodes, 5241198 blocks
262059 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
160 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Create the mount point:
mkdir -p /mnt/tmphadoop
Mount it on that directory:
mount /dev/sdb1 /mnt/tmphadoop
Edit /etc/rc.local so the mount is repeated automatically at every boot:
vi /etc/rc.local
mount /dev/sdb1 /mnt/tmphadoop
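Instead of a mount line in rc.local, the more conventional place for a boot-time mount is /etc/fstab. Assuming the same device and mount point as above, the entry would look like:

```
# device     mount point      fstype  options   dump  fsck-order
/dev/sdb1    /mnt/tmphadoop   ext2    defaults  0     2
```

Either approach works; the fstab route has the advantage that mount -a and boot-time fsck know about the filesystem.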
Check the result:
[root@lmap ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
18156292 2574216 14644912 15% /
/dev/sda1 101086 18382 77485 20% /boot
tmpfs 517552 0 517552 0% /dev/shm
/dev/sdb1 20635700 176200 19411264 1% /mnt/tmphadoop
Finally, hand the directory over to the hadoop user (otherwise Hadoop complains about the /mnt/tmphadoop directory at startup):
sudo chown -R hadoop:hadoop /mnt/tmphadoop/
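Putting the disk steps together, here is a sketch of the per-node sequence as one script. It defaults to a dry run that only echoes the commands (flip DRY_RUN to 0 after reviewing the output on one machine); the device, mount point, and user are the ones used in this post.

```shell
#!/bin/sh
DRY_RUN=${DRY_RUN:-1}   # 1 = only print the commands; set 0 to execute
PART=/dev/sdb1          # partition created with fdisk above
MP=/mnt/tmphadoop       # Hadoop data mount point

# Echo the command in dry-run mode, otherwise execute it.
run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run mkfs -t ext2 "$PART"            # format (destroys data on $PART)
run mkdir -p "$MP"                  # create the mount point
run mount "$PART" "$MP"             # mount the new filesystem
run chown -R hadoop:hadoop "$MP"    # hand it to the hadoop user
```

With DRY_RUN=1 it prints four "would run: ..." lines and touches nothing. Partitioning is deliberately left out of the script, since piping canned answers into fdisk is the riskiest step to automate.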