1. Compress the project files
zip -r Project.zip ./*
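If the packaging needs to happen from Python rather than the zip CLI, a minimal sketch using the standard zipfile module builds an equivalent archive; the ./Project source directory and the archive name are assumptions, not taken from these notes:

# build_zip.py -- hypothetical sketch using the standard library only.
import os
import zipfile

def zip_project(src_dir, archive_path):
    """Recursively add every file under src_dir to archive_path."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to src_dir so imports resolve inside the zip.
                zf.write(full, os.path.relpath(full, src_dir))

zip_project("./Project", "Project.zip")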
2. Configure PYTHONPATH to point to that directory
3. Create a configuration file conf.py in the project:
PROJECT_SOURCE = r'/usr/Project.zip'
Referencing external modules from code (the same approach can also be embedded in js, java, or scala):
# Pull the module path from conf
import sys
from conf import PROJECT_SOURCE
sys.path.append(PROJECT_SOURCE)  # make the zip archive importable
# Project base directory and data directory
from settings import BASE_DIR, DATA_DIR
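The settings module imported above is not shown in these notes; a minimal sketch of what it could contain follows. Deriving BASE_DIR from the file's own location and placing DATA_DIR under it are both assumptions:

# settings.py -- hypothetical sketch; the real module is not part of these notes.
import os

# Base directory of the project: the directory containing this settings.py.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

# Data directory, assumed to live under the base directory. A trailing slash
# is kept because the snippet below joins paths with '+', not os.path.join.
DATA_DIR = os.path.join(BASE_DIR, 'data') + '/'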
Referencing a class inside the zip archive:
import importlib

# Build the fully qualified module path of the class to load
import_module = "project.subpackage.{0}".format(class_name)
# Import the module that contains the class
module = importlib.import_module(import_module)
# Look up the class object on the module
HandlerClass = getattr(module, class_name)
# Path of the configuration file the class needs at run time
filename = DATA_DIR + 'feature_filter/' + 'feature_filter.json'
# Instantiate the class with that configuration file
handler = HandlerClass(filename)
# Call the class's entry method (similar to main)
res = handler.execute(gai_ss.ss.sparkContext, gai_ss.ss)
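For this getattr-based loading to work, a handler class only needs to accept the config-file path in its constructor and expose an execute(sparkContext, sparkSession) entry method. A hypothetical handler sketching that contract (the module path, class name, and JSON keys are illustrative, not taken from these notes):

# project/subpackage/feature_filter.py -- hypothetical handler sketch.
import json

class FeatureFilter(object):
    def __init__(self, config_path):
        # Load the JSON configuration passed in by the caller.
        with open(config_path) as f:
            self.config = json.load(f)

    def execute(self, sc, ss):
        # Entry method invoked by the framework (similar to main).
        df = ss.read.json(self.config['input_path'])
        return df.select(*self.config['columns'])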
4. Run the program
In the project subdirectory, package everything with zip -r gai_platform.zip *
Submit to the cluster (note that spark-submit options must come before the application .py file; anything after it is passed as application arguments):
bin/spark-submit --master yarn --deploy-mode cluster --py-files gai_platform.zip gai_platform/gai_feature_project/data_preprocessing_dataframe/FeatureBucketizer.py
bin/spark-submit --master yarn --deploy-mode client --py-files gai_platform.zip gai_platform/gai_feature_project/data_preprocessing_dataframe/FeatureBucketizer.py
The dependency archive can also be hosted on HDFS while the driver script stays local (e.g. C:\gai_platform\gai_feature_project\data_preprocessing_dataframe\HDFSInteraction.py on a Windows machine):
spark-submit --master local --py-files hdfs://namenode-ai.geotmt.com:8020/user/dp/data/gai_platform.zip gai_platform/gai_feature_project/data_preprocessing_dataframe/HDFSInteraction.py
The general form:
bin/spark-submit \
  --py-files <deps>.zip \
  main.py
--py-files packages the module packages and .py files that main.py depends on into a single local file (*.zip, or *.egg for third-party modules such as numpy and pandas).
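Putting the pieces together, a minimal sketch of what such a main.py might look like; the app name and the commented import path inside gai_platform.zip are assumptions:

# main.py -- hypothetical driver script, submitted e.g. with:
#   bin/spark-submit --master yarn --deploy-mode cluster --py-files gai_platform.zip main.py
from pyspark.sql import SparkSession

# Modules shipped via --py-files are importable on the driver and executors:
# from gai_platform.gai_feature_project import some_module  # assumed layout

if __name__ == '__main__':
    ss = SparkSession.builder.appName('gai_platform_job').getOrCreate()
    sc = ss.sparkContext
    # ... load a handler and run handler.execute(sc, ss) as shown above ...
    ss.stop()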