Missing Value Treatment


Missing values are a common phenomenon in real-world data. Knowing how to handle them effectively is a required step to reduce bias and to build powerful models. Let's explore the various options for dealing with missing values and how to implement them.

Data prep and pattern

Let's use the BostonHousing dataset in the mlbench package to discuss the various approaches to treating missing values. Though the original BostonHousing data doesn't have missing values, I am going to randomly introduce some. This way, we can validate the imputed values against the actuals, so we know how effective each approach is at reproducing the actual data. Let's begin by importing the data from the mlbench package and randomly inserting missing values (NA).

# initialize the data
data ("BostonHousing", package="mlbench")
original <- BostonHousing  # backup original data

# Introduce missing values
set.seed(100)
BostonHousing[sample(1:nrow(BostonHousing), 40), "rad"] <- NA
BostonHousing[sample(1:nrow(BostonHousing), 40), "ptratio"] <- NA

head(BostonHousing)

#>      crim zn indus chas   nox    rm  age    dis rad tax ptratio      b lstat medv
#> 1 0.00632 18  2.31    0 0.538 6.575 65.2 4.0900   1 296    15.3 396.90  4.98 24.0
#> 2 0.02731  0  7.07    0 0.469 6.421 78.9 4.9671   2 242    17.8 396.90  9.14 21.6
#> 3 0.02729  0  7.07    0 0.469 7.185 61.1 4.9671   2 242    17.8 392.83  4.03 34.7
#> 4 0.03237  0  2.18    0 0.458 6.998 45.8 6.0622   3 222    18.7 394.63  2.94 33.4
#> 5 0.06905  0  2.18    0 0.458 7.147 54.2 6.0622   3 222    18.7 396.90  5.33 36.2
#> 6 0.02985  0  2.18    0 0.458 6.430 58.7 6.0622   3 222    18.7 394.12  5.21 28.7

The missing values have been injected. Though we know where they are, let's quickly check the pattern of missing values using mice::md.pattern.

# Pattern of missing values
library(mice)
md.pattern(BostonHousing)  # pattern of missing values in the data.

#>     crim zn indus chas nox rm age dis tax b lstat medv rad ptratio   
#> 431    1  1     1    1   1  1   1   1   1 1     1    1   1       1  0
#>  35    1  1     1    1   1  1   1   1   1 1     1    1   0       1  1
#>  35    1  1     1    1   1  1   1   1   1 1     1    1   1       0  1
#>   5    1  1     1    1   1  1   1   1   1 1     1    1   0       0  2
#>        0  0     0    0   0  0   0   0   0 0     0    0  40      40 80

The output shows 431 complete rows, 35 rows missing only rad, 35 missing only ptratio, and 5 missing both. Broadly, there are four ways you can handle missing values:

1. Deleting the observations

If you have a large number of observations in your dataset, and all the classes to be predicted are sufficiently represented in the training data, then try deleting (or excluding while building the model, for example by setting na.action=na.omit) the observations (rows) that contain missing values. After deleting the observations, make sure that you:

1. Still have sufficient data points, so the model doesn't lose power.
2. Have not introduced bias (that is, a disproportionate representation or non-representation of classes).

# Example
lm(medv ~ ptratio + rad, data=BostonHousing, na.action=na.omit)
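
Before settling on this approach, it helps to check how many observations would actually remain. A quick sketch of that check using complete.cases (the count matches the md.pattern output above):

sum(complete.cases(BostonHousing))  # complete rows that would remain (431 here)
nrow(BostonHousing)                 # total rows in the data (506)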

2. Deleting the variable

If a particular variable has more missing values than the rest of the variables in the dataset, and if by removing that one variable you can save many observations, then I would suggest removing that variable, unless it is a really important predictor that makes a lot of business sense. It is a matter of weighing the importance of the variable against the number of observations lost.
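
A minimal sketch of what that decision could look like in code; the 30% cut-off is an arbitrary illustration, not a recommendation:

na_share <- colMeans(is.na(BostonHousing))                # proportion of NAs per column
na_share[na_share > 0]                                    # only rad and ptratio have NAs here
BostonHousing_reduced <- BostonHousing[, na_share < 0.3]  # keep columns below the cut-off (none are dropped in this data)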

3. Imputation with mean / median / mode

Replacing the missing values with the mean, median, or mode is a crude way of treating missing values. Depending on the context, for example if the variation is low or if the variable has little leverage over the response, such a rough approximation is acceptable and could well give satisfactory results.

library(Hmisc)
impute(BostonHousing$ptratio, mean)  # replace with mean
impute(BostonHousing$ptratio, median)  # median
impute(BostonHousing$ptratio, 20)  # replace specific number
# or if you want to impute manually
BostonHousing$ptratio[is.na(BostonHousing$ptratio)] <- mean(BostonHousing$ptratio, na.rm = T)  # not run
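
The Hmisc calls above cover the mean, the median, and a fixed value. For the mode (most frequent value), a small helper can be written by hand; most_frequent below is just an illustrative name, not a package function, and the last line is marked not run so the later examples still see the original NAs:

most_frequent <- function(x) {
  ux <- unique(x[!is.na(x)])             # the observed values
  ux[which.max(tabulate(match(x, ux)))]  # the one that occurs most often
}
BostonHousing$rad[is.na(BostonHousing$rad)] <- most_frequent(BostonHousing$rad)  # not run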

Let's compute the accuracy when the missing values are imputed with the mean.

library(DMwR)
actuals <- original$ptratio[is.na(BostonHousing$ptratio)]
predicteds <- rep(mean(BostonHousing$ptratio, na.rm=T), length(actuals))
regr.eval(actuals, predicteds)

#>        mae        mse       rmse       mape 
#> 1.62324034 4.19306071 2.04769644 0.09545664

4. Prediction

Prediction is the most advanced method of imputing missing values, and it includes several approaches such as kNN imputation, rpart, and mice.

4.1. kNN Imputation

DMwR::knnImputation uses the k-Nearest Neighbours approach to impute missing values. In simpler terms, what kNN imputation does is this: for every observation to be imputed, it identifies the ‘k’ closest observations based on euclidean distance and computes the weighted average (weighted by distance) of these ‘k’ observations.
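
To make the mechanics concrete, here is a conceptual sketch of that idea for a single missing ptratio value, written by hand. The real knnImputation also scales the variables and uses its own distance weighting, both simplified here; only the fully numeric, fully observed columns are used for the distance.

# conceptual sketch only: impute one missing ptratio value by hand
num_cols <- c("crim", "zn", "indus", "nox", "rm", "age", "dis", "tax", "b", "lstat")
complete_rows <- BostonHousing[complete.cases(BostonHousing), ]
target_row <- BostonHousing[which(is.na(BostonHousing$ptratio))[1], ]

# euclidean distance from the target row to every complete row
d <- sqrt(rowSums(sweep(as.matrix(complete_rows[, num_cols]), 2,
                        unlist(target_row[, num_cols]), "-")^2))

k <- 10
nearest <- order(d)[1:k]                          # the k closest complete rows
w <- 1 / (d[nearest] + 1e-6)                      # closer neighbours get larger weights
sum(w * complete_rows$ptratio[nearest]) / sum(w)  # weighted-average estimate of ptratio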

The advantage is that you can impute all the missing values in all variables with one call to the function. It takes the whole data frame as the argument and you don't even have to specify which variable you want to impute. But be careful not to include the response variable while imputing, because when imputing in a test/production environment, you won't be able to use the (unknown) response variable at that time.

library(DMwR)
knnOutput <- knnImputation(BostonHousing[, !names(BostonHousing) %in% "medv"])  # perform knn imputation.
anyNA(knnOutput)
#> FALSE

Let's compute the accuracy.

actuals <- original$ptratio[is.na(BostonHousing$ptratio)]
predicteds <- knnOutput[is.na(BostonHousing$ptratio), "ptratio"]
regr.eval(actuals, predicteds)
#>        mae        mse       rmse       mape 
#> 1.00188715 1.97910183 1.40680554 0.05859526 

The mean absolute percentage error (mape) has improved by ~39% compared to imputation with the mean. Good.
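
For reference, that figure follows directly from the two mape values reported above:

(0.09545664 - 0.05859526) / 0.09545664  # ~0.386, i.e. roughly a 39% reduction in mape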

4.2 rpart

The limitation of DMwR::knnImputation is that it may not be appropriate when the missing value comes from a factor variable. Both rpart and mice have the flexibility to handle that scenario. The advantage of rpart is that it needs just one of the predictor variables to be non-NA.

The idea here is to use rpart to predict the missing values instead of kNN. To handle a factor variable, we set method="class" when calling rpart(); for a numeric variable, we use method="anova". Here again, we need to make sure not to train rpart on the response variable (medv).

library(rpart)
class_mod <- rpart(rad ~ . - medv, data=BostonHousing[!is.na(BostonHousing$rad), ], method="class", na.action=na.omit)  # since rad is a factor
anova_mod <- rpart(ptratio ~ . - medv, data=BostonHousing[!is.na(BostonHousing$ptratio), ], method="anova", na.action=na.omit)  # since ptratio is numeric.
rad_pred <- predict(class_mod, BostonHousing[is.na(BostonHousing$rad), ])
ptratio_pred <- predict(anova_mod, BostonHousing[is.na(BostonHousing$ptratio), ])

Let's compute the accuracy for ptratio.

actuals <- original$ptratio[is.na(BostonHousing$ptratio)]
predicteds <- ptratio_pred
regr.eval(actuals, predicteds)
#>        mae        mse       rmse       mape 
#> 0.71061673 0.99693845 0.99846805 0.04099908 

The mean absolute percentage error (mape) has improved by another ~30% compared to knnImputation. Very good.

Accuracy for rad

actuals <- original$rad[is.na(BostonHousing$rad)]
predicteds <- as.numeric(colnames(rad_pred)[apply(rad_pred, 1, which.max)])
mean(actuals != predicteds)  # compute misclass error.
#> 0.25  

This yields a mis-classification error of 25%. Not bad for a factor variable!

4.3 mice

mice, short for Multivariate Imputation by Chained Equations, is an R package that provides advanced features for missing value treatment. It uses a slightly uncommon two-step approach: mice() builds the imputation model and complete() generates the completed data. The mice(df) call produces multiple complete copies of df, each with different imputations of the missing data, and complete() returns one or several of these data sets, with the default being the first. Let's see how to impute ‘rad’ and ‘ptratio’:

library(mice)
miceMod <- mice(BostonHousing[, !names(BostonHousing) %in% "medv"], method="rf")  # perform mice imputation, based on random forests.
miceOutput <- complete(miceMod)  # generate the completed data.
anyNA(miceOutput)
#> FALSE
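
Since mice() keeps every imputed copy (five by default), here is a minimal sketch of how the other copies can be used; the lm formula is purely for illustration:

head(complete(miceMod, 2))               # the second completed copy instead of the first
fit <- with(miceMod, lm(ptratio ~ tax))  # fit the same model on each imputed copy
summary(pool(fit))                       # pool the estimates across imputations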

Let's compute the accuracy of ptratio.

actuals <- original$ptratio[is.na(BostonHousing$ptratio)]
predicteds <- miceOutput[is.na(BostonHousing$ptratio), "ptratio"]
regr.eval(actuals, predicteds)
#>        mae        mse       rmse       mape 
#> 0.36500000 0.78100000 0.88374204 0.02121326

The mean absolute percentage error (mape) has improved by a further ~48% compared to rpart. Excellent!

Let's compute the accuracy of rad.

actuals <- original$rad[is.na(BostonHousing$rad)]
predicteds <- miceOutput[is.na(BostonHousing$rad), "rad"]
mean(actuals != predicteds)  # compute misclass error.
#> 0.15

The mis-classification error reduced to 15%, which is 6 out of 40 observations. This is a good improvement compared to rpart’s 25%.

If you'd like to dig deeper, see the mice manual or this other post about mice from DataScience+.

Though we have an idea of how each method performs, there is not enough evidence to conclude which method is better or worse. But these are definitely worth testing out the next time you impute missing values.

If you have any questions, leave a comment below or contact me on LinkedIn.
