Causal Inference with DoWhy: A Case Study on the IHDP Dataset

This post works through a causal inference problem on the IHDP dataset. Using Python's DoWhy library, we build a causal model and then identify and estimate the effect of specialist home visits on infants' cognitive test scores. The experiment compares several estimators, including linear regression and propensity score methods, and finishes with placebo and data-subset checks that try to refute the estimate.

0x01. Case Background

IHDP (Infant Health and Development Program) is a classic semi-synthetic dataset used to study the relationship between specialist home visits and infants' later cognitive test scores. The original data come from a randomized controlled trial, so the ground truth of the causal treatment effect is available. To reproduce the selection bias typical of observational data, a non-random subset of the treated units was deliberately removed. The resulting dataset contains 747 samples (treated: 139; control: 608) and 25 covariates describing attributes of the children and their mothers.

0x02. Running the Experiment

0x02_1. Import the required packages

# importing required libraries
import dowhy
from dowhy import CausalModel
import pandas as pd
import numpy as np

0x02_2. Load the data

The data can be found here: IHDP dataset (the CSV in the AMLab-Amsterdam/CEVAE repository referenced in the commented-out line below).

Reading the CSV directly from that URL failed in my environment (network issues), so I saved the data as a local CSV and read it from disk instead.

# original remote read; failed for me because of network issues
# data= pd.read_csv("https://raw.githubusercontent.com/AMLab-Amsterdam/CEVAE/master/datasets/IHDP/csv/ihdp_npci_1.csv", header = None)
# read the locally saved copy instead
data = pd.read_csv("data/ihdp.csv", header = None)
data
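If you prefer to keep the original remote source as the primary option, a small fallback works too. This is only a sketch; the local path data/ihdp.csv is my own copy and an assumption on your machine.

# try the remote CSV first and fall back to the local copy on failure
url = ("https://raw.githubusercontent.com/AMLab-Amsterdam/CEVAE/"
       "master/datasets/IHDP/csv/ihdp_npci_1.csv")
try:
    data = pd.read_csv(url, header=None)
except Exception:
    # network blocked or unreachable: use the locally saved copy
    data = pd.read_csv("data/ihdp.csv", header=None)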

The raw data (no header row, 30 unnamed columns) looks like this; the preview elides columns 10 through 19:

   0         1         2         3         4         5         6         7         8         9  ...  20  21  22  23  24  25  26  27  28  29
0  1  5.599916  4.318780  3.268256  6.854457 -0.528603 -0.343455  1.128554  0.161703 -0.316603  ...   1   1   1   1   0   0   0   0   0   0
1  0  6.875856  7.856495  6.636059  7.562718 -1.736945 -1.802002  0.383828  2.244320 -0.629189  ...   1   1   1   1   0   0   0   0   0   0
2  0  2.996273  6.633952  1.570536  6.121617 -0.807451 -0.202946 -0.360898 -0.879606  0.808706  ...   1   0   1   1   0   0   0   0   0   0
3  0  1.366206  5.697239  1.244738  5.889125  0.390083  0.596582 -1.850350 -0.879606 -0.004017  ...   1   0   1   1   0   0   0   0   0   0
4  0  1.963538  6.202582  1.685048  6.191994 -1.045229 -0.602710  0.011465  0.161703  0.683672  ...   1   1   1   1   0   0   0   0   0   0

0x02_3. Data preparation

col =  ["treatment", "y_factual", "y_cfactual", "mu0", "mu1" ,]
for i in range(1,26):
    col.append("x"+str(i))
data.columns = col
data = data.astype({"treatment":'bool'}, copy=False)
data.head()

After renaming the columns, the data looks like this (again with x6 through x15 elided in the preview):

  treatment  y_factual  y_cfactual       mu0       mu1        x1        x2        x3        x4        x5  ...  x16  x17  x18  x19  x20  x21  x22  x23  x24  x25
0      True   5.599916    4.318780  3.268256  6.854457 -0.528603 -0.343455  1.128554  0.161703 -0.316603  ...    1    1    1    1    0    0    0    0    0    0
1     False   6.875856    7.856495  6.636059  7.562718 -1.736945 -1.802002  0.383828  2.244320 -0.629189  ...    1    1    1    1    0    0    0    0    0    0
2     False   2.996273    6.633952  1.570536  6.121617 -0.807451 -0.202946 -0.360898 -0.879606  0.808706  ...    1    0    1    1    0    0    0    0    0    0
3     False   1.366206    5.697239  1.244738  5.889125  0.390083  0.596582 -1.850350 -0.879606 -0.004017  ...    1    0    1    1    0    0    0    0    0    0
4     False   1.963538    6.202582  1.685048  6.191994 -1.045229 -0.602710  0.011465  0.161703  0.683672  ...    1    1    1    1    0    0    0    0    0    0
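As a quick sanity check, the sample counts should match the numbers quoted in the background section (139 treated vs. 608 control, 747 in total):

# verify the treated/control split described in 0x01
print(data["treatment"].value_counts())   # expected: False 608, True 139
print("total samples:", len(data))        # expected: 747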

0x02_4. Model

# Create a causal model from the data and given common causes.
model=CausalModel(
        data = data,
        treatment='treatment',
        outcome='y_factual',
        common_causes=["x"+str(i) for  i in range(1,26)]
        )
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))

The resulting causal graph is shown below.

(figure: causal graph rendered by view_model and saved as causal_model.png)

Unfortunately the rendered image is quite low resolution. In the graph, x1 through x25 are the common causes, treatment is the treatment variable, and y_factual is the outcome; each common cause points to both treatment and y_factual, and treatment points to y_factual.
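If the PNG written by view_model is too blurry, one workaround is to redraw the same DAG yourself at a higher DPI. This is only a sketch and assumes the graphviz Python package plus the Graphviz binaries are installed; it reproduces the graph structure declared above rather than reading anything out of the DoWhy model.

# redraw the declared DAG (x1..x25 -> treatment, x1..x25 -> y_factual,
# treatment -> y_factual) at 300 dpi with graphviz
from graphviz import Digraph

dot = Digraph(format="png")
dot.attr(dpi="300")
dot.edge("treatment", "y_factual")
for i in range(1, 26):
    xi = "x" + str(i)
    dot.edge(xi, "treatment")
    dot.edge(xi, "y_factual")
dot.render("causal_model_hires", cleanup=True)   # writes causal_model_hires.png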

0x02_5. Identify

#Identify the causal effect
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True, method_name="maximal-adjustment")
print(identified_estimand)

The output is as follows. Because all 25 covariates were declared as common_causes, the backdoor adjustment set contains every x variable, while no instrumental or frontdoor variables exist in this graph:

Estimand type: EstimandType.NONPARAMETRIC_ATE

### Estimand : 1
Estimand name: backdoor
Estimand expression:
d/d[treatment] E[y_factual | x10,x18,x1,x25,x15,x4,x12,x9,x11,x13,x6,x17,x5,x20,x23,x8,x2,x24,x7,x19,x22,x14,x21,x3,x16]

Estimand assumption 1, Unconfoundedness: If U→{treatment} and U→y_factual then P(y_factual|treatment,x10,x18,x1,x25,x15,x4,x12,x9,x11,x13,x6,x17,x5,x20,x23,x8,x2,x24,x7,x19,x22,x14,x21,x3,x16,U) = P(y_factual|treatment,x10,x18,x1,x25,x15,x4,x12,x9,x11,x13,x6,x17,x5,x20,x23,x8,x2,x24,x7,x19,x22,x14,x21,x3,x16)

### Estimand : 2
Estimand name: iv
No such variable(s) found!

### Estimand : 3
Estimand name: frontdoor
No such variable(s) found!
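If you want the adjustment set programmatically rather than by parsing the printed estimand, recent DoWhy versions expose accessors on the identified estimand; treat the exact method names below as an assumption for your installed version.

# pull the backdoor adjustment set (should be x1..x25) and check for IVs
print(identified_estimand.get_backdoor_variables())
print(identified_estimand.get_instrumental_variables())   # expected: empty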

0x02_6. Estimate

0x02_6_1. Linear regression

# Estimate the causal effect and compare it with the naive (unadjusted) ATE
estimate = model.estimate_effect(identified_estimand,
        method_name="backdoor.linear_regression", test_significance=True
)

print(estimate)

print("Causal Estimate is " + str(estimate.value))

# naive ATE: simple difference of mean outcomes between treated and control
data_1 = data[data["treatment"]==1]
data_0 = data[data["treatment"]==0]
print("ATE", np.mean(data_1["y_factual"])- np.mean(data_0["y_factual"]))

The output is as follows:

*** Causal Estimate ***

## Identified estimand
Estimand type: EstimandType.NONPARAMETRIC_ATE

### Estimand : 1
Estimand name: backdoor
Estimand expression:
d/d[treatment] E[y_factual | x10,x18,x1,x25,x15,x4,x12,x9,x11,x13,x6,x17,x5,x20,x23,x8,x2,x24,x7,x19,x22,x14,x21,x3,x16]

Estimand assumption 1, Unconfoundedness: If U→{treatment} and U→y_factual then P(y_factual|treatment,x10,x18,x1,x25,x15,x4,x12,x9,x11,x13,x6,x17,x5,x20,x23,x8,x2,x24,x7,x19,x22,x14,x21,x3,x16,U) = P(y_factual|treatment,x10,x18,x1,x25,x15,x4,x12,x9,x11,x13,x6,x17,x5,x20,x23,x8,x2,x24,x7,x19,x22,x14,x21,x3,x16)

## Realized estimand
b: y_factual~treatment+x10+x18+x1+x25+x15+x4+x12+x9+x11+x13+x6+x17+x5+x20+x23+x8+x2+x24+x7+x19+x22+x14+x21+x3+x16
Target units: ate

## Estimate
Mean value: 3.9286717508727156

Causal Estimate is 3.9286717508727156
ATE 4.021121012430829
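Because IHDP is semi-synthetic, the file also ships the counterfactual outcome y_cfactual and the noiseless potential outcomes mu0 and mu1, so we can compute a ground-truth ATE to compare the estimators against. A minimal sketch, assuming the usual interpretation of these columns for ihdp_npci_1.csv:

# ground-truth ATE from the noiseless potential outcomes
true_ate_mu = np.mean(data["mu1"] - data["mu0"])

# the same quantity reconstructed from factual / counterfactual outcomes
t = data["treatment"].values            # boolean after the astype above
y1 = np.where(t, data["y_factual"], data["y_cfactual"])
y0 = np.where(t, data["y_cfactual"], data["y_factual"])
true_ate_cf = np.mean(y1 - y0)

print("true ATE (mu1 - mu0):       ", true_ate_mu)
print("true ATE (factual/cfactual):", true_ate_cf)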

0x02_6_2. Propensity score matching

estimate = model.estimate_effect(identified_estimand,
        method_name="backdoor.propensity_score_matching"
)

print("Causal Estimate is " + str(estimate.value))

print("ATE", np.mean(data_1["y_factual"])- np.mean(data_0["y_factual"]))

The output is as follows:

Causal Estimate is 3.97913882321704
ATE 4.021121012430829
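For intuition, the matching idea can be sketched by hand: fit a propensity model, then pair each treated unit with the control unit whose score is closest and average the outcome differences. This is a rough ATT-style sketch using scikit-learn, not DoWhy's exact implementation, so the number will not match exactly.

# hand-rolled 1-nearest-neighbour propensity score matching
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

X = data[["x" + str(i) for i in range(1, 26)]].values
t = data["treatment"].astype(int).values
y = data["y_factual"].values

# estimate propensity scores P(T=1 | X)
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

# match each treated unit to the control unit with the closest score
treated = np.where(t == 1)[0]
control = np.where(t == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

# average outcome difference over matched pairs (effect on the treated)
att = np.mean(y[treated] - y[control[idx.ravel()]])
print("hand-rolled matching estimate:", att)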

0x02_6_3. Propensity score stratification

estimate = model.estimate_effect(identified_estimand,
        method_name="backdoor.propensity_score_stratification", method_params={'num_strata':50, 'clipping_threshold':5}
)

print("Causal Estimate is " + str(estimate.value))
print("ATE", np.mean(data_1["y_factual"])- np.mean(data_0["y_factual"]))

The output is as follows (num_strata sets the number of propensity score strata; clipping_threshold is meant to discard strata with too few treated or control units):

Causal Estimate is 3.4550471588628207
ATE 4.021121012430829
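Under the hood this estimator bins units by propensity score, estimates the effect within each stratum, and averages the per-stratum effects weighted by stratum size. A rough sketch of that idea (again with scikit-learn and pandas, not DoWhy's implementation, and with a simple decile binning rather than the num_strata / clipping_threshold settings above):

# hand-rolled propensity score stratification
from sklearn.linear_model import LogisticRegression

X = data[["x" + str(i) for i in range(1, 26)]].values
t = data["treatment"].astype(int).values
y = data["y_factual"].values
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

df = pd.DataFrame({"t": t, "y": y, "ps": ps})
df["stratum"] = pd.qcut(df["ps"], q=10, labels=False, duplicates="drop")

effects, weights = [], []
for _, g in df.groupby("stratum"):
    if g["t"].nunique() < 2:          # skip strata missing one of the groups
        continue
    effects.append(g.loc[g["t"] == 1, "y"].mean() - g.loc[g["t"] == 0, "y"].mean())
    weights.append(len(g))
print("hand-rolled stratification estimate:", np.average(effects, weights=weights))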

0x02_6_4. Propensity score weighting

estimate = model.estimate_effect(identified_estimand,
        method_name="backdoor.propensity_score_weighting"
)

print("Causal Estimate is " + str(estimate.value))

print("ATE", np.mean(data_1["y_factual"])- np.mean(data_0["y_factual"]))

The output is as follows:

Causal Estimate is 3.9286717508727156
ATE 4.021121012430829
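The weighting estimator reweights each unit by the inverse of its probability of receiving the treatment it actually got. A minimal Horvitz-Thompson style sketch of that idea (DoWhy also offers stabilized and normalized variants, so the exact number will differ):

# hand-rolled inverse propensity weighting (IPW) estimate of the ATE
from sklearn.linear_model import LogisticRegression

X = data[["x" + str(i) for i in range(1, 26)]].values
t = data["treatment"].astype(int).values
y = data["y_factual"].values

ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
ate_ipw = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))
print("hand-rolled IPW estimate:", ate_ipw)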

0x02_7. Refute

0x02_7_1. random_common_cause

refute_results=model.refute_estimate(identified_estimand, estimate,
        method_name="random_common_cause")
print(refute_results)

The output is as follows; the new effect is essentially unchanged after adding a random common cause, which is what we expect from a robust estimate:

Refute: Add a random common cause
Estimated effect:3.4550471588628207
New effect:3.4550471588628215
p value:2.0

0x02_7_2. placebo_treatment_refuter

res_placebo=model.refute_estimate(identified_estimand, estimate,
        method_name="placebo_treatment_refuter")
print(res_placebo)

The output is as follows; replacing the treatment with a placebo drives the estimated effect to roughly zero, as it should:

Refute: Use a Placebo Treatment
Estimated effect:3.4550471588628207
New effect:-0.00207657716864257
p value:0.88
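The refuter reruns the analysis with the treatment column replaced by a random placebo. In the DoWhy versions I have used you can control this through extra keyword arguments passed to refute_estimate; treat the exact names (placebo_type, num_simulations) as an assumption for your version.

# more simulations for a more stable placebo p-value
res_placebo_20 = model.refute_estimate(
    identified_estimand, estimate,
    method_name="placebo_treatment_refuter",
    placebo_type="permute",     # replace treatment with a random permutation
    num_simulations=20,
)
print(res_placebo_20)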

0x02_7_3. data_subset_refuter

res_subset=model.refute_estimate(identified_estimand, estimate,
        method_name="data_subset_refuter", subset_fraction=0.9)
print(res_subset)

The output is as follows; the estimate barely moves when refit on random 90% subsets of the data:

Refute: Use a subset of data
Estimated effect:3.4550471588628207
New effect:3.466444805696942
p value:0.9

0x03. Summary

This case study follows the standard DoWhy workflow of model, identify, estimate, and refute to carry out a complete causal analysis on the IHDP data.
