A summary of what I have picked up recently, in three parts:
Linux
ECharts
Data mining
I recently did a course project: build a suitable classifier for the adult dataset and evaluate it.
I'm sharing the results here. Many of the issues only became clear after I finished, so this is mainly for beginners like me.
Here is the summary of the adult dataset from the UCI repository page:
Data Set Characteristics: Multivariate
Number of Instances: 48842
Area: Social
Attribute Characteristics: Categorical, Integer
Number of Attributes: 14
Date Donated: 1996-05-01
Associated Tasks: Classification
Missing Values? Yes
Number of Web Hits: 913044
Class label: >50K, <=50K.
age: continuous.
workclass: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.
fnlwgt: continuous.
education: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool.
education-num: continuous.
marital-status: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse.
occupation: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces.
relationship: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.
race: White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other, Black.
sex: Female, Male.
capital-gain: continuous.
capital-loss: continuous.
hours-per-week: continuous.
native-country: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands.
From this, the attributes of the dataset look like the following:
Field | Meaning | Type
age | Age | double
workclass | Work class* | string
fnlwgt | Final weight (a sampling weight, not a serial number) | double
education | Education level* | string
education_num | Years of education | double
marital_status | Marital status* | string
occupation | Occupation* | string
relationship | Relationship* | string
race | Race* | string
sex | Sex* | string
capital_gain | Capital gain | double
capital_loss | Capital loss | double
hours_per_week | Hours worked per week | double
native_country | Native country* | string
income | Income (class label) | string
Fields marked with * are categorical.
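For reference, here is the first record of adult.data as it appears in the UCI file. Note that the field separator is a comma followed by a space (", "), which is why the awk scripts use -F ", ":

```shell
# A record from adult.data (first line of the UCI file); separator is ", ".
line='39, State-gov, 77516, Bachelors, 13, Never-married, Adm-clerical, Not-in-family, White, Male, 2174, 0, 40, United-States, <=50K'
# Splitting on ", " yields 15 fields: the 14 attributes plus the income label.
echo "$line" | awk -F ", " '{print NF}'   # prints 15
```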
The data also contains missing values, marked with "?".
The first step is data cleaning, so I used awk on Linux to find and handle the missing entries.
#!/bin/sh
# Find missing values: print every "?" cell with its row, column, and running counters.
# Usage: ./find_missing.sh infile outfile
infile=$1
outfile=$2
awk -F ", " 'BEGIN{id=1; num=1; num2=1}{
    tf=1
    for(i=1; i<=14; i++){
        if($i=="?"){
            tf=0
            # row id, column index, value, "?" counter, incomplete-row counter
            print id", "i", "$i", "num", "num2
            num=num+1
        }
    }
    id=id+1
    if(tf){
        # complete row: nothing to report here
        # print $1"\t"$2"\t"$3"\t"$4"\t"$5"\t"$6"\t"$7"\t"$8"\t"$9"\t"$10"\t"$11"\t"$12"\t"$13"\t"$14"\t"$15
    }else{
        num2=num2+1
    }
}' "$infile" > "$outfile"
In the script, num counts how many times "?" appears and num2 counts how many rows contain at least one "?".
On our data the script reports num=4262 and num2=2399.
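The counting logic is easy to sanity-check on a few made-up rows (toy data, not from adult.data):

```shell
# Three toy rows: the second contains one '?', the third contains two.
printf '%s\n' 'a, b, c' 'a, ?, c' '?, e, ?' |
awk -F ", " '{
    hit = 0
    for (i = 1; i <= NF; i++) if ($i == "?") { num++; hit = 1 }
    if (hit) rows++
} END { print num, rows }'   # prints "3 2": three "?" cells across two rows
```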
The dataset we were given has a different number of rows from the one on the UCI site, presumably because ours is an older version.
Since the dataset is fairly large (32561 rows) and the 2399 incomplete rows make up only about 7% of it, I chose to simply discard the rows with missing values.
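The 7% figure is just the 2399 incomplete rows divided by the 32561 total:

```shell
# 2399 incomplete rows out of 32561 total rows.
awk 'BEGIN { printf "%.1f%%\n", 2399 / 32561 * 100 }'   # prints 7.4%
```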
#!/bin/sh
# Drop every row that contains a "?", writing the complete rows out tab-separated.
# Usage: ./drop_missing.sh infile outfile
infile=$1
outfile=$2
awk -F ", " '{
    tf=1
    for(i=1; i<=14; i++){
        if($i=="?"){
            tf=0
        }
    }
    if(tf){
        print $1"\t"$2"\t"$3"\t"$4"\t"$5"\t"$6"\t"$7"\t"$8"\t"$9"\t"$10"\t"$11"\t"$12"\t"$13"\t"$14"\t"$15
    }
}' "$infile" > "$outfile"
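The same keep-if-complete filter, run on toy rows to show what survives:

```shell
# Toy demo of the filter: rows containing any '?' are dropped.
printf '%s\n' 'a, b, c' 'a, ?, c' 'd, e, f' |
awk -F ", " '{
    keep = 1
    for (i = 1; i <= NF; i++) if ($i == "?") keep = 0
    if (keep) print
}'
# prints:
# a, b, c
# d, e, f
```

On the real data (assuming the script is saved as clean.sh, a name I made up), running ./clean.sh adult.data adult.clean should leave 32561 - 2399 = 30162 complete rows.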