Data Loading
Created: Friday, 01 March 2019
This chapter covers data loading.
In practice this means loading data in Python, Java, or Scala.
Java loading is familiar territory: stream-based I/O with InputStream, OutputStream, and the like.
Python loading
Plain loading
f = open('test.txt', 'r')   # returns a file object
line = f.readline()         # read one line at a time
while line:
    print(line)
    line = f.readline()
f.close()
Or:
for line in open("test.txt"):
    print(line)
Or:
f = open("test.txt", "r")
lines = f.readlines()   # read the whole file at once, returned as a list of lines
for line in lines:
    print(line)
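In modern Python the idiomatic form is a with block, which closes the file automatically even if an exception is raised. A minimal sketch using the same test.txt:
with open("test.txt", "r") as f:
    for line in f:
        print(line.rstrip("\n"))   # rstrip removes the trailing newline so print does not double-space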
Loading files with numpy
import numpy as np
from sklearn.model_selection import train_test_split

# loadtxt accepts a filename directly; use skiprows=1 if the CSV has a header row
data = np.loadtxt("xxxxx.csv", delimiter=",", skiprows=0)
X, y = data[:, :-1], data[:, -1]   # last column is the label, the rest are features
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=2019)
train_data = np.column_stack((X_train, y_train))
np.savetxt('train_usual.csv', train_data, delimiter=',')
test_data = np.column_stack((X_test, y_test))
np.savetxt('test_usual.csv', test_data, delimiter=',')
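To sanity-check the round trip, the saved split can be loaded back the same way; a quick sketch using the file names above:
train = np.loadtxt('train_usual.csv', delimiter=',')
X_train2, y_train2 = train[:, :-1], train[:, -1]   # same column convention as when saving
print(X_train2.shape, y_train2.shape)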
Loading files with pandas and sklearn
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("test.csv")
# .ix was removed from modern pandas; use .iloc for positional indexing
# (first column is the label, the remaining columns are features)
x, y = data.iloc[:, 1:], data.iloc[:, 0]
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=2019)
print(len(x_train))
print(len(x_test))
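For classification data, train_test_split also accepts a stratify argument that keeps the class proportions equal in both splits; a variant of the call above, assuming y holds class labels:
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.3, random_state=2019, stratify=y)   # stratify=y assumes y is categorical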
Spark
Load directly from a table, e.g. spark.table("some_table") (the table name here is a placeholder), or read a file:
val df = spark.read.csv("test.csv")
val Array(train, test) = df.randomSplit(Array(0.7, 0.3))
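Since the rest of this note is in Python, here is a rough PySpark equivalent, a minimal sketch assuming a local session and the same test.csv:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.read.csv("test.csv", inferSchema=True)   # add header=True if the file has a header row
train, test = df.randomSplit([0.7, 0.3], seed=2019)  # seed makes the split reproducible
print(train.count(), test.count())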