Training a TensorFlow model directly from data on HDFS

Background

  • The training data is fairly large: the train split alone is already 27 GB, which eats a lot of local disk space on every run.
  • getmerge takes too long: 15 minutes just to pull down the training, test, and evaluation sets.

Procedure

Environment: TensorFlow 1.13, Python 2.7

Getting the file list from HDFS

  • This part mainly follows: https://blog.csdn.net/liukanglucky/article/details/102952686
import commands
import re
import tensorflow as tf

def get_file_list(root_path, path_pattern=[]):
    """
    Build the list of HDFS files under root_path.
    :param root_path: HDFS directory to list recursively
    :param path_pattern: optional list of path fragments; if given, only matching paths are kept
    :return: list of fully qualified hdfs:// file paths
    """
    cmd = """hadoop fs -ls -R {}""".format(root_path.strip())
    if len(path_pattern) > 0:
        pattern = "|".join(["(" + str(p.replace('/', '\\/')) + ")" for p in path_pattern])
    else:
        pattern = ""

    # Keep only data files: drop _SUCCESS markers and, when a pattern was given, non-matching paths
    def validate_path_pattern(path):
        if pattern != "" and re.search(pattern, path) and '_SUCCESS' not in path:
            return True
        elif pattern == "" and '_SUCCESS' not in path:
            return True
        else:
            return False

    status, output = commands.getstatusoutput(cmd)
    output = output.split('\n')
    output = list(filter(validate_path_pattern, output))
    file_list = list()
    # Each `hadoop fs -ls` line for a file has exactly 8 whitespace-separated fields;
    # anything else (warnings, banners) means the output is polluted and should not be trusted.
    polluted = any(len(info.split()) != 8 for info in output)
    if status == 0 and len(output) > 0 and not polluted:
        # info[0] == '-' keeps plain files only (directories start with 'd');
        # prefix the namenode address so TensorFlow can open the files directly.
        file_list = ["hdfs://nn-cluster" + info.split()[-1] for info in output if info[0] == '-']
    return file_list
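
A minimal usage sketch; the directories below are placeholders for illustration, and the hdfs://nn-cluster prefix hard-coded in get_file_list has to match your own namenode:

train_files = get_file_list("/user/someone/wide_deep/train")                     # placeholder path
eval_files = get_file_list("/user/someone/wide_deep/eval", path_pattern=["part-"])  # placeholder path
print "found {} training files".format(len(train_files))
# e.g. ['hdfs://nn-cluster/user/someone/wide_deep/train/part-00000', ...]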

input_fn

def input_fn(data_file_lst, num_epochs, shuffle, batch_size):
    """Generate an input function for the Estimator.

    data_file_lst: list of hdfs:// file paths (as returned by get_file_list).
    _CSV_COLUMNS, _CSV_COLUMN_DEFAULTS and _NUM_EXAMPLES are module-level
    constants describing the TSV schema and dataset sizes.
    """

    def parse_csv(value):
        # tf.logging.info('Parsing {}'.format(data_file_lst))
        columns = tf.decode_csv(value, record_defaults=_CSV_COLUMN_DEFAULTS, field_delim='\t')
        features = dict(zip(_CSV_COLUMNS, columns))
        labels = features.pop('label')
        # classes = tf.equal(labels, 1)
        return features, labels

    # Extract lines from the input files using the Dataset API.
    dataset = tf.data.TextLineDataset(data_file_lst)

    if shuffle:
        dataset = dataset.shuffle(buffer_size=_NUM_EXAMPLES['train'])
    dataset = dataset.repeat(num_epochs)
    # Batch first, then parse: tf.decode_csv can decode a whole batch of lines at once.
    dataset = dataset.batch(batch_size).map(parse_csv, num_parallel_calls=32)
    dataset = dataset.prefetch(batch_size)
    return dataset
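
For context, here is a sketch of how input_fn plugs into an Estimator, assuming a made-up three-column TSV schema and a plain LinearClassifier; the column names, defaults, sizes, model and path below are illustrative placeholders, not the original setup:

# Hypothetical schema; the real _CSV_COLUMNS / defaults come from your own data.
_CSV_COLUMNS = ['label', 'age', 'clicks']
_CSV_COLUMN_DEFAULTS = [[0], [0.0], [0.0]]
_NUM_EXAMPLES = {'train': 1000000}

feature_columns = [
    tf.feature_column.numeric_column('age'),
    tf.feature_column.numeric_column('clicks'),
]
estimator = tf.estimator.LinearClassifier(feature_columns=feature_columns)

train_files = get_file_list("/user/someone/wide_deep/train")  # placeholder HDFS path
estimator.train(
    input_fn=lambda: input_fn(train_files, num_epochs=1, shuffle=True, batch_size=512))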

Configuration in the shell script

#!/bin/bash

# Point HADOOP_PREFIX/HADOOP_HOME at the local Hadoop installation
export HADOOP_PREFIX=/usr/local/matrix/
export HADOOP_HOME=$HADOOP_PREFIX
# TensorFlow needs the full Hadoop classpath to talk to HDFS
export CLASSPATH=$($HADOOP_PREFIX/bin/hadoop classpath --glob)
# libhdfs.so / libjvm.so must be discoverable at runtime
export LD_LIBRARY_PATH=/usr/local/matrix/bin:/usr/local/matrix/lib/native:/usr/local/cuda-8.0/jre/lib/amd64/server:$LD_LIBRARY_PATH
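
Once these variables are exported, a quick sanity check that TensorFlow can actually see HDFS is to list a directory through tf.gfile (the path below is a placeholder):

import tensorflow as tf

test_dir = "hdfs://nn-cluster/user/someone/wide_deep/train"   # placeholder path
print tf.gfile.ListDirectory(test_dir)    # fails here if libhdfs.so / CLASSPATH are not set up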

One extra point to watch: when training from HDFS, the data needs to be shuffled before it is written out.

import subprocess
from pyspark import SparkContext
from pyspark.sql import HiveContext
import math
import random
from pyspark.sql import functions as F

def change_str(s):
    # Join all columns of a Row into one tab-separated line
    lst = list(s)
    for i in range(len(lst)):
        lst[i] = str(lst[i])
    return "\t".join(lst)


def save_df(df, save_path):
    # Remove the target directory if it already exists, then write one TSV line per row
    if 0 == subprocess.call(["hadoop", "dfs", "-test", "-e", save_path]):
        subprocess.call(["hadoop", "dfs", "-rm", "-r", save_path])
    rdd = df.rdd.map(lambda s: change_str(s))
    print "after change str:{}".format(rdd.first())
    rdd.saveAsTextFile(save_path)
    print "save success"

def df_random_shuffle_save(df, save_path):
    # Global shuffle: attach a random column, sort by it, drop it again, then save
    col_name = df.columns
    df = df.withColumn("random_num", F.rand(seed=200))
    df = df.orderBy(df.random_num)
    df = df.select(col_name)
    save_df(df, save_path)
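
A usage sketch under assumed names; the SparkContext setup, the Hive query and the output path are placeholders:

sc = SparkContext(appName="shuffle_train_set")    # hypothetical app name
hive_ctx = HiveContext(sc)
# Hypothetical source table and output directory
train_df = hive_ctx.sql("select label, age, clicks from tmp.wide_deep_train")
df_random_shuffle_save(train_df, "/user/someone/wide_deep/train")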

Compared with reading from local disk, training time increases by about 6%, and the model's AUC is unaffected.

Notes

  • The main problems encountered: when running locally, errors that libhdfs.so and libjvm.so could not be found, as well as errors while reading the data. The fix is to locate the directories containing these .so files, add them to LD_LIBRARY_PATH, and add the Hadoop classpath to CLASSPATH (see the shell configuration above).
  • The HDFS path must be filled in for your own cluster: hdfs://host:port/path