Better modelling and visualisation of newspaper count data

(This article was first published on  Quantifying Memory, and kindly contributed to R-bloggers)     


In this post I outline how count data may be modelled using a negative binomial distribution, in order to present trends in time-series count data more accurately than linear methods allow. I also show how to use ANOVA to identify the point at which one model gains explanatory power, and how confidence intervals may be calculated and plotted around the predicted values. The resulting illustration gives a robust visualisation of how the Beslan hostage crisis has taken on features of a memory event.
Recently I wrote up a piece about quantifying memory and news, and proposed that two distinct linear models might be the way to go about it. The problem with linear models, however, is that by their nature they don't take into account the ways in which trends may be non-linear. They also lead to nonsense predictions, such as negative counts.
Generally, then, linear models should be avoided when mapping count data. What are the alternatives? Typically, a Poisson distribution would be the ideal way to capture the probability that a clustering of observations is non-random. A feature of the Poisson distribution is that it assumes the sample mean equals the sample variance; this assumption is very frequently violated when dealing with news data, as a story will have a small number of large values followed by a large number of small values, resulting in a low mean and a high variance (overdispersion). Instead, a negative binomial distribution may be used, which takes a parameter theta specifying the degree to which the variance exceeds the mean. Estimates provided by a negative binomial model are similar to Poisson estimates, but its probability values tend to be more conservative.
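As a quick check of this assumption (a minimal sketch, assuming the monthly counts sit in a data frame mdb with a count column), the sample mean and variance can be compared directly; a variance far in excess of the mean signals overdispersion and favours the negative binomial:
# Under a Poisson model the mean and variance should be roughly equal;
# for news data the variance is typically much larger (overdispersion)
mean(mdb$count)
var(mdb$count)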
One strength of R is its ability to fit so-called generalised linear models. The negative binomial family comes from the MASS package; theta may be estimated using glm.nb:
library(MASS)  # provides glm.nb and negative.binomial
# Counts as a function of a date trend, a news estimator, and the
# memory variables (anniversaries a1 and a2, plus elections)
fmla <- as.formula("count ~ date + e_news + a1 + a2 + elections")
theta <- glm.nb(fmla, data = mdb)$theta  # estimated dispersion parameter
results <- glm(fmla, data = mdb, family = negative.binomial(theta))
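To see the conservatism mentioned above in practice, the same formula can be fitted as a plain Poisson GLM and the two coefficient tables compared (a hypothetical side check, not part of the original analysis; poisson_fit is my own name): under overdispersion the negative binomial standard errors are wider, so its p-values are more conservative.
# Hypothetical comparison fit: same formula, Poisson family
poisson_fit <- glm(fmla, data = mdb, family = poisson)
summary(poisson_fit)$coefficients  # narrower standard errors
summary(results)$coefficients  # wider, more conservative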
The above results may be examined with ANOVA to identify which variables contribute substantially to the model.
anova(results)
Analysis of Deviance Table

Model: Negative Binomial(10.85), link: log

Response: count

Terms added sequentially (first to last)

          Df Deviance Resid. Df Resid. Dev
NULL                         96        994
date       1      731        95        263
e_news     1       69        94        194
a1         1       52        93        142
a2         1        1        92        141
elections  1       12        91        129
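To read the sequential Deviance column as shares of the total, each term's deviance can be divided by the null deviance (a small sketch computed from the same table):
# Proportion of the null deviance accounted for by each term
tab <- anova(results)
round(tab[-1, "Deviance"] / tab[1, "Resid. Dev"], 2)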
Details about the coding of the variables, and the logic behind models contrasting stories as news or memory events may be found here.
From the ANOVA results I can identify which group of variables contributes most substantially to describing the data distribution: memory variables or news variables. As I am interested in distributions where memory effects are apparent, and these develop only over time, I loop through the data, deleting the first month's values on each pass, until either there is no data left or the memory variables have greater explanatory power than the news estimators:
mdb2 <- mdb  # work on a copy of the data
n <- 0
news <- 0
memory <- 0
while (n == 0 && nrow(mdb2) > 0) {  # stop if we run out of data
    # refit the negative binomial model on the (possibly truncated) data
    aov3 <- glm(fmla, data = mdb2, family = negative.binomial(glm.nb(fmla, data = mdb2)$theta))
    t <- data.frame(t(anova(aov3)[2]))  # deviance attributable to each term
    news <- sum(t[, grep("date|news", colnames(t))])
    memory <- sum(t[, grep("a1|a2", colnames(t))])  # anniversary (memory) terms
    if (news > memory) {
        # news still dominates: drop the earliest month and try again
        mdb2 <- mdb2[mdb2$date > min(mdb2$date), ]
    } else n <- 1
}
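Once the loop exits, the surviving window can be inspected; min(mdb2$date) marks the first month at which the memory terms out-explain the news terms, and it is reused below to mask predictions and to position the shaded rectangle:
min(mdb2$date)  # start of the candidate memory period
nrow(mdb2)  # months remaining in the truncated data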
From the negative binomial model, predictions and confidence intervals may be created for the period identified as having potential memory significance. The 95% confidence interval either side of a predicted value is calculated by multiplying the standard error by 1.96. I want to plot the whole data, but only the predicted values for the significant period, so I next removed predictions for the data up until the period with memory potential. Finally, I created a data frame containing the interval for which no estimates were calculated (this will be used to blur out data in ggplot).
library(lubridate)  # for the months() period arithmetic below
# Predictions on the response scale, with standard errors, for all data
estimate <- predict.glm(aov3, newdata = mdb, type = "response", se.fit = TRUE)
mdb$estimate <- estimate$fit
mdb$se <- estimate$se.fit
# Blank out predictions before the period with memory potential
mdb$estimate[mdb$date < min(mdb2$date)] <- NA
mdb$se[mdb$date < min(mdb2$date)] <- NA
mdb$date <- as.Date(mdb$date)
# 95% confidence bounds; counts cannot be negative, so floor at zero
mdb$upper <- mdb$estimate + (1.96 * mdb$se)
mdb$lower <- mdb$estimate - (1.96 * mdb$se)
mdb$lower[mdb$lower < 0] <- 0
# Interval with no estimates, used to shade out the ignored region
rect <- data.frame(min(mdb$date) - months(1), min(as.Date(mdb2$date)))
colnames(rect) <- c("one", "two")
In the plot below I visualise the square root of the number of articles about the Beslan hostage tragedy in the Russian press. The square root is chosen to prevent the high initial interest from obscuring the trend that emerged over time. To create the plot I:
  • add a ribbon representing the confidence interval
  • plot the observed values
  • add a dotted line representing the fitted values
  • edit the formatting and add a title
  • add a shaded rectangle over the area I wish to ignore:
library(ggplot2)
ggplot(mdb, aes(date, sqrt(count), ymax = sqrt(upper), ymin = sqrt(lower)),
       environment = environment()) +
    geom_ribbon(colour = "red", fill = "light grey", alpha = 0.4, linetype = 2) +  # confidence interval
    geom_point(colour = "dark green", size = 3, alpha = 0.8) +  # observed values
    geom_line(aes(date, sqrt(estimate))) +  # fitted values
    theme_bw() +
    ggtitle(paste0("Regression graph for the Beslan Hostage crisis, exhibiting possible features of memory event since ",
                   as.Date(min(mdb2$date)))) +
    geom_rect(aes(xmin = rect$one, xmax = rect$two, ymin = -Inf, ymax = +Inf),
              fill = "light grey", colour = "grey", linetype = 2, alpha = 0.015)  # shade ignored region
[Figure: square-root article counts for Beslan with fitted line, 95% confidence ribbon, and the pre-memory period shaded]
anova(aov3)
Analysis of Deviance Table

Model: Negative Binomial(6.449), link: log

Response: count

Terms added sequentially (first to last)

          Df Deviance Resid. Df Resid. Dev
NULL                         70      135.3
date       1    19.18        69      116.1
e_news     1     0.73        68      115.4
a1         1    24.54        67       90.9
a2         1     0.65        66       90.2
elections  1     0.29        65       89.9
Notice in the above table how the anniversary variables exceed the explanatory power of the news and date variables. This indicates that by the end of 2006 Beslan was increasingly featuring as a memory event and less as a news story. Also notice how the remaining deviance is quite large: this model apparently fits the data less well than the model for the entire data (which explained 85% of the deviance), but this is due to the original estimate being flattered by the accurate prediction of a few outliers.
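The proportion of null deviance explained can be read straight off a fitted glm object (a minimal sketch, using the full-data model results fitted earlier):
# Share of the null deviance accounted for by the full-data model
1 - results$deviance / results$null.deviance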