What Is a Random Forest
Random forest is a natural choice for almost any prediction problem, including non-linear ones. It is a relatively recent machine learning strategy (developed at Bell Labs in the 1990s) that can be applied almost anywhere. It belongs to a large family of machine learning algorithms: ensemble learning methods.
Ensemble Learning
Ensemble learning solves a single prediction problem by building and combining several models. It works by training multiple classifiers/models, each of which learns and makes predictions independently. These predictions are then combined into a single prediction, which is therefore better than the prediction of any individual classifier.
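To make the voting idea concrete, below is a minimal sketch of hard (majority) voting over several independently trained classifiers; the synthetic data and the choice of models are illustrative assumptions, not part of the original text.

# Minimal sketch of majority voting (data and models are illustrative assumptions)
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
# three different models each learn independently; the majority label wins
ensemble = VotingClassifier([('lr', LogisticRegression(max_iter=1000)),
                             ('dt', DecisionTreeClassifier()),
                             ('nb', GaussianNB())], voting='hard')
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))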
The Training Process
Two points deserve attention when building each decision tree: sampling and full splitting. First come two random sampling steps: random forest samples both the rows and the columns of the input data. Row sampling is done with replacement, so the sampled set may contain duplicate rows. If there are N input samples, N samples are drawn. As a result, each tree is trained on something other than the full data set, which makes over-fitting less likely. Column sampling then selects m features out of the M available (m << M). The sampled data is used to grow a decision tree by splitting fully: a leaf node either cannot be split further, or all of the samples in it point to the same class. Many decision tree algorithms include an important pruning step, but it is skipped here; the randomness introduced by the two sampling steps means that even without pruning the trees do not over-fit.
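The two sampling steps can be sketched in a few lines of numpy; the array names and sizes here are illustrative assumptions, not part of the original text.

# Sketch of the two random sampling steps (sizes and data are illustrative)
import numpy as np

rng = np.random.default_rng(0)
N, M, m = 100, 20, 4                # N rows, M features, m sampled features (m << M)
X = rng.normal(size=(N, M))         # stand-in training data

row_idx = rng.integers(0, N, size=N)            # row sampling WITH replacement (duplicates expected)
col_idx = rng.choice(M, size=m, replace=False)  # column sampling: m of the M features
X_subset = X[np.ix_(row_idx, col_idx)]          # the data one tree would be grown on
print(X_subset.shape)                           # (100, 4)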
Each tree in a random forest built this way is weak on its own, but together they are powerful. I like this analogy for the algorithm: each decision tree is an expert in one narrow field (since we choose m of the M features for each tree to learn from), so the forest contains many experts in different fields. A new problem (new input data) can be examined from these different angles, and the final answer is reached by having the experts vote.
Advantages
- Performs well on many data sets
- Holds a significant edge over other algorithms on many current data sets
- Handles very high-dimensional data (many features) without requiring feature selection
- Reports which features matter most after training (see the sketch after this list)
- Uses an unbiased estimate of the generalization error while the forest is being built
- Trains quickly
- Can detect interactions between features during training
- Parallelizes easily
- Is relatively simple to implement
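As a quick illustration of the feature-importance point, sklearn's RandomForestClassifier exposes a feature_importances_ attribute once fitted; the synthetic data below is an illustrative assumption.

# Sketch: reading feature importances from a fitted forest (data is illustrative)
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, n_informative=3, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for i, imp in enumerate(forest.feature_importances_):
    print('feature %d: importance %.3f' % (i, imp))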
Predicting Titanic survival on Kaggle with Python's sklearn package
#coding:utf-8
import pandas as pd
import numpy as np
import csv
from sklearn.ensemble import RandomForestClassifier
# header=0: the first row of the CSV holds the column names
train_df = pd.read_csv(r'D:\PythonDDD\shuju files\tantanic\train.csv', header=0)
# map Sex to numeric 0/1; astype(int) casts the column to integers
train_df['Gender'] = train_df['Sex'].map({'female': 0, 'male': 1}).astype(int)
# fill missing Embarked values with the mode (most frequent port)
if len(train_df.Embarked[train_df.Embarked.isnull()]) > 0:
    train_df.loc[train_df.Embarked.isnull(), 'Embarked'] = train_df.Embarked.dropna().mode().values[0]
# enumerate the unique ports so each category maps to an integer index
Ports = list(enumerate(np.unique(train_df['Embarked'])))
print(Ports)
Ports_dict = {name: i for i, name in Ports}  # dictionary in the form port name -> index
train_df.Embarked = train_df.Embarked.map(lambda x: Ports_dict[x]).astype(int)  # replace each port string with its index
# fill missing ages with the median age
median_age = train_df['Age'].dropna().median()
if len(train_df.Age[train_df.Age.isnull()]) > 0:
    train_df.loc[train_df.Age.isnull(), 'Age'] = median_age
# drop columns the model will not use
train_df = train_df.drop(['Name', 'Sex', 'Ticket', 'Cabin', 'PassengerId'], axis=1)
# apply the same preprocessing to the test file
test_df = pd.read_csv(r'D:\PythonDDD\shuju files\tantanic\test.csv', header=0)
test_df['Gender'] = test_df['Sex'].map({'female': 0, 'male': 1}).astype(int)
if len(test_df.Embarked[test_df.Embarked.isnull()]) > 0:
    test_df.loc[test_df.Embarked.isnull(), 'Embarked'] = test_df.Embarked.dropna().mode().values[0]
test_df.Embarked = test_df.Embarked.map(lambda x: Ports_dict[x]).astype(int)
median_age = test_df['Age'].dropna().median()
if len(test_df.Age[test_df.Age.isnull()]) > 0:
    test_df.loc[test_df.Age.isnull(), 'Age'] = median_age
# fill missing fares with the median fare of the passenger's class
if len(test_df.Fare[test_df.Fare.isnull()]) > 0:
    median_fare = np.zeros(3)  # one median per passenger class
    for f in range(0, 3):  # loop 0 to 2
        median_fare[f] = test_df[test_df.Pclass == f + 1]['Fare'].dropna().median()
    for f in range(0, 3):  # loop 0 to 2
        test_df.loc[(test_df.Fare.isnull()) & (test_df.Pclass == f + 1), 'Fare'] = median_fare[f]
ids = test_df['PassengerId'].values
test_df = test_df.drop(['Name', 'Sex', 'Ticket', 'Cabin', 'PassengerId'], axis=1)
train_data = train_df.values
test_data = test_df.values
print('Training...')
forest = RandomForestClassifier(n_estimators=100)
# first column is the label (Survived); the remaining columns are the features
forest = forest.fit(train_data[0::, 1::], train_data[0::, 0])
print('Predicting...')
output = forest.predict(test_data).astype(int)
# raw string so that backslashes in the Windows path are not treated as escapes
predictions_file = open(r'D:\PythonDDD\shuju files\tantanic\myfirstforest.csv', 'w', newline='')
open_file_object = csv.writer(predictions_file)
open_file_object.writerow(["PassengerId", "Survived"])
open_file_object.writerows(zip(ids, output))
predictions_file.close()
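Before writing a submission file it is worth estimating accuracy locally; here is a minimal sketch using sklearn's cross_val_score on the prepared training arrays (this validation step is my addition, not part of the original script).

# Sketch: 5-fold cross-validation on the training data (not in the original script)
from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(RandomForestClassifier(n_estimators=100),
                            train_data[0::, 1::], train_data[0::, 0], cv=5)
print('CV accuracy: %.3f +/- %.3f' % (cv_scores.mean(), cv_scores.std()))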
Implementing the random forest algorithm in Python from scratch
#coding:utf-8
import csv
from random import seed
from random import randrange
from math import sqrt
def loadCSV(filename):  # load the data, storing one row per list entry
    dataSet = []
    with open(filename, 'r') as file:
        csvReader = csv.reader(file)
        for line in csvReader:
            dataSet.append(line)
    return dataSet

# convert every column except the label column to float
def column_to_float(dataSet):
    featLen = len(dataSet[0]) - 1
    for data in dataSet:
        for column in range(featLen):
            data[column] = float(data[column].strip())
# randomly split the data set into n_folds blocks for cross-validation:
# one block serves as the test set, the remaining blocks as the training set
def spiltDataSet(dataSet, n_folds):
    fold_size = int(len(dataSet) / n_folds)
    dataSet_copy = list(dataSet)
    dataSet_spilt = []
    for i in range(n_folds):
        fold = []
        while len(fold) < fold_size:  # while keeps looping until the fold is full; a single if would only run once
            index = randrange(len(dataSet_copy))
            fold.append(dataSet_copy.pop(index))  # pop removes the chosen row and returns it, so folds do not overlap
        dataSet_spilt.append(fold)
    return dataSet_spilt
# build a bootstrap subsample: draw rows with replacement until the target size is reached
def get_subsample(dataSet, ratio):
    subdataSet = []
    lenSubdata = round(len(dataSet) * ratio)
    while len(subdataSet) < lenSubdata:
        index = randrange(len(dataSet))  # sample with replacement over all rows
        subdataSet.append(dataSet[index])
    return subdataSet
# partition a data set on feature `index` at threshold `value`
def data_spilt(dataSet, index, value):
    left = []
    right = []
    for row in dataSet:
        if row[index] < value:
            left.append(row)
        else:
            right.append(row)
    return left, right
# compute the cost of a split as the Gini impurity of the two partitions
def spilt_loss(left, right, class_values):
    loss = 0.0
    for class_value in class_values:
        left_size = len(left)
        if left_size != 0:  # guard against division by zero
            prop = [row[-1] for row in left].count(class_value) / float(left_size)
            loss += (prop * (1.0 - prop))
        right_size = len(right)
        if right_size != 0:
            prop = [row[-1] for row in right].count(class_value) / float(right_size)
            loss += (prop * (1.0 - prop))
    return loss
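# For reference (this note is an addition, not from the original post): the quantity
# accumulated above is the Gini impurity, G = sum over classes c of p_c * (1 - p_c),
# computed for the left and right partitions and summed. A pure partition contributes
# 0, so the split with the smallest total separates the classes best.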
# pick n_features features at random, then choose the best split among them
def get_best_spilt(dataSet, n_features):
    features = []
    class_values = list(set(row[-1] for row in dataSet))
    b_index, b_value, b_loss, b_left, b_right = 999, 999, 999, None, None
    while len(features) < n_features:
        index = randrange(len(dataSet[0]) - 1)
        if index not in features:
            features.append(index)
    for index in features:  # find the feature/threshold pair with the smallest loss
        for row in dataSet:
            left, right = data_spilt(dataSet, index, row[index])  # candidate left/right branches
            loss = spilt_loss(left, right, class_values)
            if loss < b_loss:  # keep the cheapest split seen so far
                b_index, b_value, b_loss, b_left, b_right = index, row[index], loss, left, right
    return {'index': b_index, 'value': b_value, 'left': b_left, 'right': b_right}
# decide a leaf's output label: the majority class among its rows
def decide_label(data):
    output = [row[-1] for row in data]
    return max(set(output), key=output.count)
# recursively split a node, turning branches into leaves when a stopping rule fires
def sub_spilt(root, n_features, max_depth, min_size, depth):
    left = root['left']
    right = root['right']
    del (root['left'])
    del (root['right'])
    if not left or not right:  # one side empty: both children become the same leaf
        root['left'] = root['right'] = decide_label(left + right)
        return
    if depth > max_depth:  # depth limit reached: make both children leaves
        root['left'] = decide_label(left)
        root['right'] = decide_label(right)
        return
    if len(left) < min_size:
        root['left'] = decide_label(left)
    else:
        root['left'] = get_best_spilt(left, n_features)
        sub_spilt(root['left'], n_features, max_depth, min_size, depth + 1)
    if len(right) < min_size:
        root['right'] = decide_label(right)
    else:
        root['right'] = get_best_spilt(right, n_features)
        sub_spilt(root['right'], n_features, max_depth, min_size, depth + 1)
# build one decision tree: find the root split, then split recursively
def build_tree(dataSet, n_features, max_depth, min_size):
    root = get_best_spilt(dataSet, n_features)
    sub_spilt(root, n_features, max_depth, min_size, 1)
    return root
# predict one row's label by walking down the tree
def predict(tree, row):
    if row[tree['index']] < tree['value']:
        if isinstance(tree['left'], dict):
            return predict(tree['left'], row)
        else:
            return tree['left']
    else:
        if isinstance(tree['right'], dict):
            return predict(tree['right'], row)
        else:
            return tree['right']

# bagging: every tree votes and the majority label wins
def bagging_predict(trees, row):
    predictions = [predict(tree, row) for tree in trees]
    return max(set(predictions), key=predictions.count)
# create the random forest: train each tree on a fresh bootstrap subsample
def random_forest(train, test, ratio, n_features, max_depth, min_size, n_trees):
    trees = []
    for i in range(n_trees):
        subsample = get_subsample(train, ratio)  # sample from the full training set each time
        tree = build_tree(subsample, n_features, max_depth, min_size)
        trees.append(tree)
    predict_values = [bagging_predict(trees, row) for row in test]
    return predict_values
# compute classification accuracy
def accuracy(predict_values, actual):
    correct = 0
    for i in range(len(actual)):
        if actual[i] == predict_values[i]:
            correct += 1
    return correct / float(len(actual))
if __name__ == '__main__':
    seed(1)
    dataSet = loadCSV('sonar-all-data.csv')
    column_to_float(dataSet)
    n_folds = 5
    max_depth = 15
    min_size = 1
    ratio = 1.0
    # a common heuristic: n_features = int(sqrt(len(dataSet[0]) - 1))
    n_features = 15
    n_trees = 10
    folds = spiltDataSet(dataSet, n_folds)  # first split the data into folds
    scores = []
    for fold in folds:
        # copy the list (folds[:]) rather than assigning it: a plain assignment would
        # alias folds, so removing the test fold would also mutate folds itself
        train_set = folds[:]
        train_set.remove(fold)  # the remaining folds form the training set
        train_set = sum(train_set, [])  # flatten the list of folds into one training list
        test_set = []
        for row in fold:
            row_copy = list(row)
            row_copy[-1] = None  # hide the label from the model
            test_set.append(row_copy)
        actual = [row[-1] for row in fold]
        predict_values = random_forest(train_set, test_set, ratio, n_features, max_depth, min_size, n_trees)
        accur = accuracy(predict_values, actual)
        scores.append(accur)
    print('Trees: %d' % n_trees)
    print('scores: %s' % scores)
    print('mean score: %s' % (sum(scores) / float(len(scores))))
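A usage note, my own observation rather than part of the original code: the commented-out line above points at the standard heuristic of setting n_features to roughly the square root of the feature count. The UCI sonar data has 60 features, so the heuristic would give about 7 or 8 instead of the 15 used here; sampling fewer features per split makes each tree weaker but less correlated, which is usually what makes the ensemble's vote effective.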