Installing pyhs2 on Ubuntu, for accessing Hive from Python
Before installing pyhs2 with pip, first install python-dev, libsasl2-dev, and gcc:
apt-get install python-dev libsasl2-dev gcc
On CentOS/RHEL, install the equivalents with yum (see the wiki on pyhs2's GitHub page):
yum install gcc-c++ python-devel.x86_64 cyrus-sasl-devel.x86_64
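With these dependencies in place, pyhs2 can be installed with pip install pyhs2. A minimal usage sketch, modeled on the example in the pyhs2 README, is shown below; the host, port, credentials, database, and query are placeholders for your own HiveServer2 setup:

import pyhs2

# Connection parameters are placeholders; adjust them for your HiveServer2.
with pyhs2.connect(host='masternode',
                   port=10000,                # default HiveServer2 port
                   authMechanism="PLAIN",
                   user='username',
                   password='password',
                   database='default') as conn:
    with conn.cursor() as cur:
        # Run a query and iterate over the result rows
        cur.execute("SELECT * FROM some_table LIMIT 10")
        for row in cur.fetch():
            print(row)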
Options for writing data to HDFS/Hive from a remote machine:
1.
1) copy the file to the master node's local disk:
scp test.txt username@masternode:/folderName/
I have already set up SSH key-based authentication, so no password is needed for this.
2) I can use ssh to remotely execute the hadoop put command:
ssh username@masternode "hadoop dfs -put /folderName/test.txt hadoopFolderName/"
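If this two-step copy needs to be scripted, a small Python sketch using subprocess could drive both commands (hostnames, paths, and the HDFS target directory below are placeholders, and key-based SSH is assumed as above):

import subprocess

local_file = "test.txt"           # placeholder: file produced on this machine
remote = "username@masternode"    # placeholder: master node with key-based SSH
staging_dir = "/folderName/"      # placeholder: staging directory on the master
hdfs_dir = "hadoopFolderName/"    # placeholder: target directory in HDFS

# Step 1: copy the file to the master node's local disk
subprocess.check_call(["scp", local_file, "%s:%s" % (remote, staging_dir)])

# Step 2: run the hadoop put command on the master node over ssh
subprocess.check_call(
    ["ssh", remote,
     "hadoop dfs -put %s%s %s" % (staging_dir, local_file, hdfs_dir)])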
2.
Try this (untested):
cat test.txt | ssh username@masternode "hadoop dfs -put - hadoopFoldername/"
I've used similar tricks to copy directories around:
tar cf - . | ssh remote "(cd /destination && tar xvf -)"
This sends the output of local-tar into the input of remote-tar.
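The same streaming idea can be driven from Python by attaching the local file to ssh's stdin, which avoids the intermediate copy onto the master's local disk. A hedged sketch (placeholder host and paths, untested like the original one-liner):

import subprocess

# Stream test.txt straight into "hadoop dfs -put -" on the master node;
# the destination is given as an explicit file path in HDFS.
with open("test.txt", "rb") as f:
    subprocess.check_call(
        ["ssh", "username@masternode",
         "hadoop dfs -put - hadoopFoldername/test.txt"],
        stdin=f)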
3.
Is the node where you generated the data able to reach each of your cluster nodes (the namenode and all of the datanodes)?
If you do have data connectivity then you can just execute the hadoop fs -put command from the machine where the data is generated (assuming you have the hadoop binaries installed there too):
#> hadoop fs -fs masternode:8020 -put test.bin hadoopFolderName/
4.
Hadoop provides a couple of REST interfaces; check Hoop and WebHDFS. Using them from non-Hadoop environments, you should be able to upload the file without first copying it to the master node.
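For WebHDFS specifically, a minimal sketch with the Python requests library might look like the following; the NameNode address, the 50070 port, the user name, and the paths are all assumptions, and WebHDFS must be enabled on the cluster. Creating a file is a two-step exchange: the NameNode answers with a redirect to a DataNode, and the file body is then sent to that DataNode URL:

import requests

namenode = "http://masternode:50070"                    # assumed WebHDFS endpoint
hdfs_path = "/user/username/hadoopFolderName/test.txt"  # placeholder destination

# Step 1: ask the NameNode for a write location; it replies with a 307
# redirect pointing at a DataNode (keep the redirect visible to read it).
resp = requests.put(namenode + "/webhdfs/v1" + hdfs_path,
                    params={"op": "CREATE",
                            "user.name": "username",
                            "overwrite": "true"},
                    allow_redirects=False)
datanode_url = resp.headers["Location"]

# Step 2: send the actual file contents to the DataNode URL.
with open("test.txt", "rb") as f:
    requests.put(datanode_url, data=f).raise_for_status()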
5.
Sqoop can also load data into HDFS/Hive, if the source data lives in a relational database.