Configuring CentOS yum sources
1. Check the local yum repo files:
ll /etc/yum.repos.d/
2. Back up the default yum repos:
mkdir /opt/centos-yum.bak
mv /etc/yum.repos.d/* /opt/centos-yum.bak/
3. Check the system version:
cat /etc/redhat-release
4. Download the matching yum repo file:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
5. Clear the yum cache and rebuild it:
yum clean all
yum makecache
yum list
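To confirm that the Aliyun repo is now the one in use, you can also list the enabled repositories:
yum repolist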
6. Check whether Python already exists in your environment and which version it is:
python -V
CentOS 7 ships with Python 2.7.5 by default; here we install Python 3.7.4 (other 3.x versions work as well):
wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz
After the download finishes, run ls to list the current directory; you should see Python-3.7.4.tgz. Extract it:
tar -zxf Python-3.7.4.tgz -C /opt/soft/
Enter the extracted directory and run configure, specifying where the build output should be installed:
cd /opt/soft/Python-3.7.4/
./configure --prefix=/usr/local/python3
7. Install the packages and dependencies the Python build needs (ideally before running ./configure; if you install them afterwards, re-run ./configure so they are detected):
yum install openssl-devel bzip2-devel expat-devel gdbm-devel readline-devel sqlite-devel
yum install gcc
yum -y install zlib*
yum install libffi-devel -y
8. Build and install Python:
make && make install
9. After installation, create symlinks (do not replace /usr/bin/python itself, since yum on CentOS 7 depends on Python 2):
ln -s /usr/local/python3/bin/python3 /usr/bin/python3
ln -s /usr/local/python3/bin/pip3 /usr/bin/pip3
10. Check the versions:
python3 -V
pip3 -V
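As a quick sanity check that the ssl, sqlite3, and zlib modules were built into the new interpreter (pip3 cannot reach HTTPS mirrors without ssl), assuming the -devel packages from step 7 were present during the build:
python3 -c "import ssl, sqlite3, zlib; print('ok')"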
Installing Spark
1. Extract and rename:
tar -zxf spark-2.4.4-bin-hadoop2.6.tgz -C /opt/soft
mv spark-2.4.4-bin-hadoop2.6 spark244
2. Configure spark-env.sh under spark/conf and spark-config.sh under sbin:
cp spark-env.sh.template spark-env.sh
vi spark-env.sh
export SPARK_MASTER_HOST=192.168.181.132 # master node IP
export SPARK_MASTER_PORT=7077 # job submission port
export SPARK_WORKER_CORES=2 # 2 cores per worker
export SPARK_WORKER_MEMORY=2g # 2g of memory per worker
export SPARK_MASTER_WEBUI_PORT=8888 # Spark web UI port (default is 8080)
vi spark-config.sh
export JAVA_HOME=/opt/soft/jdk180
Configure and activate the SPARK_HOME environment variable in /etc/profile (there is no need to add Spark to PATH):
#spark
export SPARK_HOME=/opt/soft/spark244
source /etc/profile
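The spark-env.sh settings above are for Spark's standalone mode. The example script later in this post uses master local[8], so starting the cluster is optional; but to verify the configuration you can bring up the master and a worker and open the web UI (a sketch; start-all.sh expects SSH access to the worker hosts, localhost by default):
$SPARK_HOME/sbin/start-all.sh
Then browse to http://192.168.181.132:8888 to see the master web UI on the port configured above.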
Configuring the Douban pip mirror
In root's home directory, create a .pip folder and a pip.conf file inside it with the following content (the [global] section header is required):
[global]
# Douban mirror; any other mirror can be substituted
index-url = https://pypi.douban.com/simple
# add the mirror as a trusted host, otherwise pip may report an error
trusted-host = pypi.douban.com
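To confirm pip picked up the configuration (the config subcommand exists in pip 19, which is bundled with Python 3.7.4; older pip versions may not have it):
pip3 config list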
Python code:
import findspark
findspark.init()  # locates the Spark installation via the SPARK_HOME environment variable
from pyspark.sql import SparkSession
from pyspark.ml.clustering import KMeans
from pyspark.sql.types import DoubleType
from pyspark.sql.functions import col
from pyspark.ml.feature import VectorAssembler

if __name__ == '__main__':
    spark = SparkSession.builder.master("local[8]")\
        .config("spark.debug.maxToStringFields", "120")\
        .config("spark.executor.memory", "3g")\
        .appName("mymodel").getOrCreate()
    df = spark.read.format("csv").option("header", "true")\
        .load("hdfs://192.168.181.132:9000/events/data/events.csv")
    cols = [c for c in df.columns if c.startswith("c_")]
    feas = cols.copy()
    cols.insert(0, "event_id")
    # cast the feature columns to DoubleType: VectorAssembler only accepts numeric inputs, not strings
    df1 = df.select([col(c).cast(DoubleType()) for c in cols])
    # VectorAssembler merges the feature columns into a single vector column
    va = VectorAssembler().setInputCols(feas).setOutputCol("features")
    res = va.transform(df1).select("event_id", "features")
    model = KMeans().setK(35).setFeaturesCol("features").setPredictionCol("predict").fit(res)
    r = model.transform(res).select(col("event_id").alias("eventid"), col("predict").alias("eventtype"))
    # coalesce(1) produces a single part file under the output directory
    r.coalesce(1).write.option("sep", ",").option("header", "true")\
        .csv("hdfs://192.168.181.132:9000/events/eventtype", mode="overwrite")
    spark.stop()
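As an optional sanity check (a sketch, not part of the original script), you can look at how many events fall into each of the 35 clusters by adding the following line before spark.stop():
model.transform(res).groupBy("predict").count().orderBy("predict").show(35)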
Save the Python code above as myps.py somewhere on the Linux machine; mine is under /opt.
Install the required packages:
pip3 install findspark
pip3 install numpy
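findspark locates the Spark installation through the SPARK_HOME variable exported in /etc/profile. If the script runs in a shell where that profile has not been sourced, the install path can be passed explicitly instead (using this post's install location):
findspark.init("/opt/soft/spark244")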
Then run the file:
python3 /opt/myps.py
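Alternatively, since the script builds its own SparkSession with master local[8], it can also be launched through Spark's own submitter:
$SPARK_HOME/bin/spark-submit /opt/myps.py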
Check the output on HDFS to verify that the code ran successfully:
hdfs dfs -cat /events/eventtype/part-00000-96155969-937f-481e-a8c0-255488d96433-c000.csv|wc -l
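The part file name contains a random ID and will differ on every run, so list the output directory first to find yours:
hdfs dfs -ls /events/eventtype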