Machine Learning in Python: Describing the Basic Features of a Data Set

    Machine learning, the extraction of decision-relevant information from data, is changing modern business organizations and has had a deep influence on the software industry. Its key role is making predictions. We generally focus on the widely used algorithms, many of which address "function approximation" problems, a subset of supervised learning. Linear regression and logistic regression are the most common solutions to this class of problems, applied in many areas such as text classification, search ranking, and spam filtering. Here we discuss two families of algorithms: penalized linear regression and ensemble methods.

    Let us first walk through the workflow of building a machine learning system:

    When we face a new problem, the first step is to explore the data and discover which parts of it can be used for prediction. "Exploring" means running statistical and inspection analyses on the data to build a more direct intuition and see what information the data reveals. The next step may be adjusting the features: adding features, removing features that prove unhelpful, or choosing a different type of training target to see whether new insight emerges.

    Framing a machine learning problem:

    Informally, this means deciding what counts as a better result and making that the objective to optimize. Because a machine learning pipeline can be built in different ways, this choice drives how the problem is framed, which model is selected, and how the model is trained.

    Feature extraction and feature engineering

    Determining which attributes or variables can be used to build a model is the job of feature extraction and feature engineering. Feature extraction is the process of converting free-form raw data into rows and columns of numbers, like building a table whose columns are attributes and whose rows are sample records. Feature engineering then combines and selects features; this process decides which features will be used to build the model. Once the features are fixed, we need a predictive model, evaluate its performance, and decide whether to deploy it, searching over features and combinations of features for the best predictor. Data preparation and feature engineering typically take 80 to 90 percent of the time in a machine learning project. It is common to train 100 to 5,000 models and then pick one for deployment: providing models of different complexity lets you choose the one that best matches the problem and the data set. If you do not want a model that is too simple (sacrificing performance), nor one that is too complex (overfitting), you need to pick the most suitable complexity from that range to serve as the final model.
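The row/column conversion described above can be sketched in a few lines. This is a minimal illustration, not the sonar pipeline: the record fields ("length", "label") and the helper name are hypothetical.

def extract_features(records):
    """Convert free-form records into numeric feature rows plus labels."""
    xList, labels = [], []
    for rec in records:
        xList.append([float(rec["length"])])            # numeric attribute column
        labels.append(1.0 if rec["label"] == "M" else 0.0)  # encode label as a number
    return xList, labels

raw_records = [
    {"length": "5.1", "label": "M"},
    {"length": "3.2", "label": "R"},
    {"length": "4.8", "label": "M"},
]

X, y = extract_features(raw_records)
print(X)  # [[5.1], [3.2], [4.8]]
print(y)  # [1.0, 0.0, 1.0]

Each row of X is one sample record; each column is one attribute, exactly the table shape described above.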

    Assessing the performance of a trained model

    Whether a model is suitable is determined by its performance on a test data set. This is conceptually simple but an important step: a portion of the data is held out, excluded from training, and used to test the model's performance. Different holdout methods suit different data sizes and show different strengths. Beyond that, you should also check for changes in the conditions under which the test data is gathered, since such changes often shift the underlying statistical properties of the data.
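The holdout idea above can be sketched as a simple random split. The function name and the 30% fraction are illustrative choices, not prescribed by the text.

import random

def train_test_split(rows, test_fraction=0.3, seed=42):
    """Hold out a fraction of the rows for testing; train on the rest."""
    rng = random.Random(seed)               # fixed seed for reproducibility
    indices = list(range(len(rows)))
    rng.shuffle(indices)
    n_test = int(len(rows) * test_fraction)
    test_idx = set(indices[:n_test])
    train = [rows[i] for i in indices if i not in test_idx]
    test = [rows[i] for i in indices[:n_test]]
    return train, test

train, test = train_test_split(list(range(10)))
print(len(train), len(test))  # 7 3

The test rows never touch training, so performance measured on them estimates how the model behaves on new data.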

    Understanding the problem by understanding the data

 Case study: the rocks-versus-mines classification problem, i.e. using sonar to detect unexploded mines:

    Describing the basic features of the data:

__author__ = 'Tianlong_Zhang'
import urllib.request
import sys

#read data from the UCI data repository
target_url = ("https://archive.ics.uci.edu/ml/machine-learning-"
              "databases/undocumented/connectionist-bench/sonar/sonar.all-data")

data = urllib.request.urlopen(target_url)

#arrange data into a list of lists for the attributes
xList = []
for line in data:
    #decode bytes and split on comma
    row = line.decode().strip().split(",")
    xList.append(row)

sys.stdout.write("Number of Rows of Data = " + str(len(xList)) + '\n')
sys.stdout.write("Number of Columns of Data = " + str(len(xList[0])) + '\n')
This reads in the data and counts the number of rows and columns:



Next, we can explore the statistical features of the data set:

__author__ = 'mike_bowles'
import urllib.request
import sys
import numpy as np

#read data from the UCI data repository
target_url = ("https://archive.ics.uci.edu/ml/machine-learning-"
              "databases/undocumented/connectionist-bench/sonar/sonar.all-data")

data = urllib.request.urlopen(target_url)

#arrange data into a list of lists for the attributes
xList = []
for line in data:
    #decode bytes and split on comma
    row = line.decode().strip().split(",")
    xList.append(row)
nrow = len(xList)
ncol = len(xList[0])

#generate summary statistics for column 3 (as an example)
col = 3
colData = []
for row in xList:
    colData.append(float(row[col]))

colArray = np.array(colData)
colMean = np.mean(colArray)
colsd = np.std(colArray)
sys.stdout.write("Mean = " + '\t' + str(colMean) + '\t\t' +
                 "Standard Deviation = " + '\t' + str(colsd) + "\n")

#calculate quantile boundaries
ntiles = 4
percentBdry = []
for i in range(ntiles + 1):
    percentBdry.append(np.percentile(colArray, i * 100 / ntiles))

sys.stdout.write("\nBoundaries for 4 Equal Percentiles \n")
print(percentBdry)
sys.stdout.write(" \n")

#run again with 10 equal intervals
ntiles = 10
percentBdry = []
for i in range(ntiles + 1):
    percentBdry.append(np.percentile(colArray, i * 100 / ntiles))

sys.stdout.write("Boundaries for 10 Equal Percentiles \n")
print(percentBdry)
sys.stdout.write(" \n")

#the last column contains categorical variables
col = 60
colData = []
for row in xList:
    colData.append(row[col])

unique = set(colData)
sys.stdout.write("Unique Label Values \n")
print(unique)

#count the number of elements having each value
catDict = dict(zip(list(unique), range(len(unique))))
catCount = [0] * len(unique)
for elt in colData:
    catCount[catDict[elt]] += 1

sys.stdout.write("\nCounts for Each Value of Categorical Label \n")
print(list(unique))
print(catCount)

The output is as follows:



We can also look at a quantile plot (a normal probability plot) of one of the attributes:

__author__ = 'ubuntu'
import urllib.request
import pylab
import scipy.stats as stats

target_url = ("https://archive.ics.uci.edu/ml/machine-learning-"
              "databases/undocumented/connectionist-bench/sonar/sonar.all-data")

data = urllib.request.urlopen(target_url)

#arrange data into a list of lists for the attributes
xList = []
for line in data:
    #decode bytes and split on comma
    row = line.decode().strip().split(",")
    xList.append(row)

#collect column 3 (as an example) as floats
col = 3
colData = []
for row in xList:
    colData.append(float(row[col]))

#normal probability (Q-Q) plot of the column
stats.probplot(colData, dist="norm", plot=pylab)
pylab.show()
The plot is shown below:

  Beyond that, we can visualize the attributes with a parallel coordinates plot, drawn from the attribute values. Sometimes, once plotted, the classes separate visibly, which also helps reveal the relationship between the attribute values and the labels (target values). For example:

__author__ = 'mike_bowles'
import pandas as pd
import matplotlib.pyplot as plot

target_url = ("https://archive.ics.uci.edu/ml/machine-learning-"
              "databases/undocumented/connectionist-bench/sonar/sonar.all-data")

#read rocks-versus-mines data into a pandas DataFrame
#(column names "V0".."V60" are assigned by hand; the prefix argument
# of read_csv was removed in pandas 2.0)
rocksVMines = pd.read_csv(target_url, header=None)
rocksVMines.columns = ["V" + str(i) for i in range(rocksVMines.shape[1])]

for i in range(208):
    #assign color based on the "M" or "R" label in the last column
    if rocksVMines.iat[i, 60] == "M":
        pcolor = "red"
    else:
        pcolor = "blue"

    #plot each row of attributes as if it were series data
    dataRow = rocksVMines.iloc[i, 0:60]
    dataRow.plot(color=pcolor, alpha=0.5)

plot.xlabel("Attribute Index")
plot.ylabel("Attribute Values")
plot.show()
The resulting plot is shown below:




We can also draw cross-plots of the data, i.e. scatter plots of one attribute against another (see corrPlot.py):
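The referenced corrPlot.py is not reproduced in the post; a minimal sketch of the idea might look like the following. The two columns are synthetic stand-ins; in real use they would be sliced from the sonar DataFrame (e.g. two attribute columns of rocksVMines).

import numpy as np
import matplotlib.pyplot as plot

#synthetic stand-ins for two attribute columns of the 208-row sonar data
rng = np.random.RandomState(0)
attr1 = rng.rand(208)
attr2 = 0.8 * attr1 + 0.2 * rng.rand(208)   # a second, correlated attribute

#cross-plot: one attribute against the other
plot.scatter(attr1, attr2, alpha=0.5)
plot.xlabel("Attribute 1")
plot.ylabel("Attribute 2")
#plot.show()  # uncomment for interactive display

#Pearson correlation quantifies what the cross-plot shows visually
corr = np.corrcoef(attr1, attr2)[0, 1]
print(round(corr, 2))

Points falling near a line in the cross-plot correspond to a correlation close to 1; a shapeless cloud corresponds to a correlation near 0.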


A heat map can likewise be used to show the relationships between the attributes and the label (sampleCorrHeatMap.py):
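The referenced sampleCorrHeatMap.py is also not reproduced; a hedged sketch of a correlation heat map follows. The 208x5 matrix is a synthetic stand-in; in real use it would be the numeric attribute columns, e.g. rocksVMines.iloc[:, 0:60].values.

import numpy as np
import matplotlib.pyplot as plot

#synthetic stand-in for the sonar attribute matrix (rows = samples)
rng = np.random.RandomState(1)
X = rng.rand(208, 5)

#correlation matrix between the attributes (columns)
corrMat = np.corrcoef(X, rowvar=False)

plot.pcolor(corrMat)     # draw the heat map of pairwise correlations
plot.colorbar()
#plot.show()  # uncomment for interactive display

print(corrMat.shape)

Light and dark cells away from the diagonal flag attribute pairs that are strongly correlated, which matters later when choosing features.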


    The above introduces only a few ways of describing data. In real projects there are other methods, and the choice should match both your data-exploration needs and the characteristics of the data, so that you find an approach that accurately describes the relationships within it; this lays a better foundation for the work that follows. With this understanding of the data in hand, the next step is building the predictive model, where we must balance model complexity and handle large data sets.

Reference: Michael Bowles, Machine Learning in Python: Essential Techniques for Predictive Analysis, Wiley, 1st edition, 2015, 360 pages, ISBN-13 978-1118961742.