When modelling the relationship between several predictor variables and an outcome variable, how do I judge which variable is most meaningful?

machine-learning · r · regression · effect-size · importance
2022-04-09 05:53:47

I am facing a problem where I need to figure out two things:

  • Among several related predictor variables, which one is the most meaningful in terms of its effect on, or ability to predict, the outcome variable.
  • The order of these predictors by meaningfulness (from most meaningful to least).

Since I have no a priori hypotheses for this investigation, I figured I should run some kind of multiple regression analysis, and then perhaps extract the terms from the model and see which one is the most meaningful. I have learned in the past that p-values are not the right way to do this. So what is?

Example

Say I want to investigate which factors affect the happiness of city residents. I sample people (residents) from New York City and San Francisco and ask them to rate:

  1. Their general satisfaction with their city.
  2. How clean their city is (in their opinion).
  3. How good the education in the schools is.
  4. How good the public transportation is.

The way I see it, there are 3 related variables here (cleanliness, education, and transportation) that may be associated with general satisfaction. I want to model this relationship and draw conclusions about how NYC differs from SF. For example, could it be that for overall satisfaction the order of effects in NYC is education > transportation > cleanliness, whereas in SF it is transportation > education > cleanliness?

Here is some toy data to demonstrate.

my_df <- structure(list(location = c("sf", "nyc", "nyc", "sf", "nyc", 
                                  "nyc", "nyc", "nyc", "nyc", "sf", "nyc", "sf", "sf", "sf", "nyc", 
                                  "sf", "sf", "nyc", "nyc", "nyc", "sf", "sf", "sf", "sf", "sf", 
                                  "nyc", "sf", "sf", "nyc", "sf", "nyc", "nyc", "nyc", "sf", "nyc", 
                                  "nyc", "nyc", "sf", "nyc", "sf", "sf", "nyc", "nyc", "nyc", "nyc", 
                                  "nyc", "nyc", "nyc", "nyc", "sf", "nyc", "nyc", "sf", "sf", "nyc", 
                                  "nyc", "nyc", "nyc", "sf", "sf", "nyc", "sf", "nyc", "nyc", "sf", 
                                  "nyc", "sf", "sf", "nyc", "nyc", "nyc", "nyc", "sf", "nyc", "nyc", 
                                  "nyc", "sf", "sf", "nyc", "nyc", "nyc", "nyc", "sf", "sf", "nyc", 
                                  "nyc", "nyc", "sf", "sf", "sf", "nyc", "sf", "sf", "sf", "nyc", 
                                  "nyc", "nyc", "nyc", "nyc", "nyc"), 
                     satisfied = c(5, 1, 7, 5, 
                                   7, 1, 5, 5, 5, 7, 7, 4, 1, 3, 5, 6, 7, 7, 6, 4, 4, 5, 6, 5, 5, 
                                   7, 5, 6, 5, 4, 7, 7, 5, 5, 4, 7, 7, 5, 6, 6, 3, 6, 5, 7, 5, 7, 
                                   6, 5, 4, 3, 6, 5, 7, 3, 5, 5, 7, 5, 6, 7, 7, 7, 7, 5, 4, 7, 6, 
                                   7, 7, 6, 6, 6, 5, 7, 5, 4, 6, 4, 7, 5, 6, 6, 5, 5, 6, 7, 6, 5, 
                                   1, 5, 2, 7, 7, 7, 7, 7, 1, 3, 7, 7), 
                     clean = c(4, 1, 
                                         7, 3, 4, 1, 6, 6, 7, 4, 5, 1, 1, 1, 4, 6, 6, 1, 6, 1, 4, 2, 2, 
                                         7, 3, 5, 2, 4, 1, 1, 4, 6, 3, 5, 1, 4, 5, 2, 5, 5, 4, 5, 4, 7, 
                                         3, 6, 5, 4, 5, 4, 5, 4, 5, 1, 5, 2, 6, 5, 7, 6, 3, 7, 5, 6, 4, 
                                         6, 6, 5, 5, 5, 4, 1, 4, 4, 5, 1, 3, 1, 2, 2, 6, 4, 3, 6, 7, 7, 
                                         5, 2, 4, 1, 3, 1, 5, 3, 5, 5, 1, 1, 5, 6), 
                     edu = c(5, 
                                             1, 7, 4, 4, 1, 6, 6, 6, 4, 5, 4, 4, 3, 3, 5, 4, 1, 1, 3, 5, 6, 
                                             5, 5, 3, 6, 2, 4, 4, 4, 6, 3, 4, 7, 1, 4, 7, 5, 6, 5, 5, 5, 4, 
                                             7, 3, 7, 6, 5, 5, 4, 5, 3, 4, 4, 4, 7, 5, 4, 6, 6, 4, 7, 4, 2, 
                                             5, 6, 6, 7, 6, 7, 3, 3, 2, 6, 6, 2, 5, 3, 6, 5, 6, 4, 4, 5, 6, 
                                             7, 3, 3, 4, 5, 4, 1, 3, 4, 4, 6, 5, 1, 4, 6), 
                     transportation = c(1, 
                                              1, 7, 5, 7, 1, 4, 6, 6, 6, 6, 1, 1, 1, 5, 5, 4, 7, 6, 6, 7, 5, 
                                              2, 7, 3, 6, 1, 4, 7, 5, 6, 7, 4, 3, 2, 6, 4, 2, 6, 5, 4, 7, 6, 
                                              7, 3, 7, 4, 4, 5, 4, 6, 3, 5, 2, 7, 3, 7, 7, 7, 6, 7, 7, 7, 5, 
                                              3, 5, 4, 7, 6, 6, 4, 2, 4, 4, 5, 6, 5, 2, 6, 2, 6, 6, 3, 4, 7, 
                                              7, 7, 4, 5, 4, 5, 3, 7, 5, 7, 7, 7, 1, 6, 6)), 
                row.names = c(NA, -100L), 
                class = c("tbl_df", "tbl", "data.frame"))


library(magrittr)
library(effectsize)

my_df %>%
  lm(satisfied ~ clean*location + edu*location + transportation*location, data = .) %>%
  effectsize() %>%
  plot()
#> Warning: It is deprecated to specify `guide = FALSE` to remove a guide. Please
#> use `guide = "none"` instead.

Created on 2021-08-15 by the reprex package (v2.0.0)

To compare between the cities, I added an interaction between location and each of clean, edu, and transportation, and then used effectsize::effectsize() to get the estimates from the model. But what conclusions can I draw from these estimates?

If I have gotten this entirely wrong, please advise what other route I should take to attack this problem.

Thanks!

4 Answers

I believe you are looking for measures of (relative) variable importance (see also this thread). Many of the available methods rely on decomposing R2 to assign each predictor a rank or a share of relative importance in a multiple linear regression model. One family of such methods is better known under the term "dominance analysis" (see Azen & Budescu, 2003). Azen & Budescu (2003) also discuss other importance measures, such as importance based on regression coefficients, on correlations, or on combinations of coefficients and correlations. A general overview of techniques based on variance decomposition can be found in the paper by Grömping (2007). These techniques are implemented in the R packages relaimpo, domir, and yhat.

Here, I will illustrate one method that is model-agnostic (i.e., it can be applied to all kinds of model types) and has intuitive appeal: permutation-based variable importance. The idea is simple:

  1. Choose a performance metric that matters to you. Examples include: root mean squared error (RMSE), mean absolute error (MAE), R2, and so on. This also depends somewhat on the model type.
  2. Calculate the metric on the dataset, R_orig. This serves as the baseline performance metric.
  3. For i = 1, 2, ..., j:
    (a) Permute the values of predictor X_i in the dataset.
    (b) Recalculate the metric on the permuted data and call it R_perm.
    (c) Record the difference from the baseline as imp(X_i) = R_perm - R_orig.

Repeat this, say, 1000 times and average the importance values. Intuitively, the permutation breaks the association between the predictor X_i and the outcome. The larger the change in the performance metric, the more important the predictor. More information can be found in this chapter of Christoph Molnar's online book.

The R package vip implements this procedure (see the documentation (PDF) for more information). The following code applies the idea to your dataset. I chose R2 and the mean absolute error (MAE) as performance metrics, with 1000 permutations per predictor:

library(vip)
# The model
mod <- lm(satisfied ~ clean*location + edu*location + transportation*location, data = my_df)

# Calculate permutation-based importance with r-squared as metric
set.seed(142857) # For reproducibility
p_r2 <- vip::vi(mod, method = "permute", target = "satisfied", metric = "rsquared", pred_wrapper = predict, nsim = 1000)
p_r2

  Variable       Importance  StDev
  <chr>               <dbl>  <dbl>
1 transportation     0.198  0.0492
2 clean              0.177  0.0465
3 edu                0.0462 0.0237
4 location           0.0449 0.0250

# Calculate permutation-based importance with mae as metric
p_mae <- vip::vi(mod, method = "permute", target = "satisfied", metric = "mae", pred_wrapper = predict, nsim = 1000)
p_mae

  Variable       Importance  StDev
  <chr>               <dbl>  <dbl>
1 transportation     0.166  0.0413
2 clean              0.144  0.0400
3 location           0.0396 0.0214
4 edu                0.0368 0.0219

Based on R2: permuting transportation leads to the largest change in R2, followed by clean. The mean absolute error shows a similar ordering, with transportation and clean being most important and location and edu least important.
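The permutation procedure can also be sketched in a few lines of base R, without vip. Note that `perm_importance` is a helper name made up for this sketch, not a function from any package; it assumes the model `mod` and the data frame `my_df` defined above, and reports the average drop in R2 when a predictor is permuted (so larger = more important, the mirror image of the imp(X_i) difference defined above):

```r
# Minimal base-R sketch of permutation-based importance.
# `perm_importance` is a made-up helper name, not part of any package.
perm_importance <- function(model, data, outcome, predictors, nsim = 100) {
  r2 <- function(obs, pred) cor(obs, pred)^2
  baseline <- r2(data[[outcome]], predict(model, data))
  sapply(predictors, function(v) {
    mean(replicate(nsim, {
      perm <- data
      perm[[v]] <- sample(perm[[v]])  # break the link between X_i and the outcome
      baseline - r2(data[[outcome]], predict(model, perm))
    }))
  })
}

# e.g., with the model from above:
# sort(perm_importance(mod, my_df, "satisfied",
#                      c("clean", "edu", "transportation", "location")),
#      decreasing = TRUE)
```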

References

Azen R, Budescu DV (2003): The dominance analysis approach for comparing predictors in multiple regression. Psychological Methods 8:2, 129-148. Link

Grömping U (2007): Estimators of relative importance in linear regression based on variance decomposition. The American Statistician 61:2, 139-147. Link

Following @JTH's suggestion, the answer below is a somewhat ignorant attempt at using the {rms} package. I have to say this is the first time I've used this package, and I have very little idea of what I'm doing. So I ask anyone who can to please provide feedback!

I followed the procedure described in this chapter.

## my_df is the same toy data defined in the question above

library(rms, warn.conflicts = FALSE)
#> Loading required package: Hmisc
#> Loading required package: lattice
#> Loading required package: survival
#> Loading required package: Formula
#> Loading required package: ggplot2
#> 
#> Attaching package: 'Hmisc'
#> The following objects are masked from 'package:base':
#> 
#>     format.pval, units
#> Loading required package: SparseM
#> 
#> Attaching package: 'SparseM'
#> The following object is masked from 'package:base':
#> 
#>     backsolve

model_fit <- rms::ols(satisfied ~ clean*location + edu*location + transportation*location, data = my_df)

my_datadist <- rms::datadist(my_df)     ## apparently, we need these two lines 
options(datadist = "my_datadist")  ## otherwise we get an error with `summary(model_fit)`
                                   ## I learned it from here: https://stackoverflow.com/a/41378930/6105259 

plot(summary(model_fit))

Created on 2021-08-16 by the reprex package (v2.0.0)


As far as I understand this output, we see the effect of each predictor on the outcome variable, with 95% CIs. So, for example, we can conclude that clean has a larger effect on satisfied than edu does.

  • Does this make sense?
  • Originally, I was interested in the interaction between each of edu/clean/transportation and location, because I wanted to learn how the relationship between the predictors and the outcome varies between the cities. But here, as far as I can tell from this output, the interactions are not reflected in the effects on satisfied.

Update


Following @JTH's comment, I added another plot:

plot(anova(model_fit))

(Chi-square plot)

The information underlying this plot is here:

library(dplyr, warn.conflicts = FALSE)
library(tibble)

anova(model_fit) %>% 
  as_tibble(rownames = "factor") %>%
  mutate(across(3:6, round, 4))
#> # A tibble: 14 x 6
#>    factor                          d.f.     `Partial SS` MS      F       P      
#>    <chr>                           <anov.r> <anov.rms>   <anov.> <anov.> <anov.>
#>  1 "clean  (Factor+Higher Order F~  2        11.8209      5.9105 3.4587  0.0356 
#>  2 " All Interactions"              1         0.0000      0.0000 0.0000  0.9969 
#>  3 "location  (Factor+Higher Orde~  4         4.8955      1.2239 0.7162  0.5830 
#>  4 " All Interactions"              3         4.8821      1.6274 0.9523  0.4188 
#>  5 "edu  (Factor+Higher Order Fac~  2         4.0844      2.0422 1.1951  0.3073 
#>  6 " All Interactions"              1         3.1038      3.1038 1.8163  0.1811 
#>  7 "transportation  (Factor+Highe~  2        15.3207      7.6604 4.4828  0.0139 
#>  8 " All Interactions"              1         0.0476      0.0476 0.0279  0.8678 
#>  9 "clean * location  (Factor+Hig~  1         0.0000      0.0000 0.0000  0.9969 
#> 10 "location * edu  (Factor+Highe~  1         3.1038      3.1038 1.8163  0.1811 
#> 11 "location * transportation  (F~  1         0.0476      0.0476 0.0279  0.8678 
#> 12 "TOTAL INTERACTION"              3         4.8821      1.6274 0.9523  0.4188 
#> 13 "TOTAL"                          7        90.9761     12.9966 7.6055  0.0000 
#> 14 "ERROR"                         92       157.2139      1.7088     NA      NA

Update 2


To address @EdM's comment, here is a print of anova(model_fit) without converting it to a tibble.

anova(model_fit) %>% 
  round(., 4)
#>                 Analysis of Variance          Response: satisfied 
#> 
#>  Factor                                                   d.f. Partial SS
#>  clean  (Factor+Higher Order Factors)                      2    11.8209  
#>   All Interactions                                         1     0.0000  
#>  location  (Factor+Higher Order Factors)                   4     4.8955  
#>   All Interactions                                         3     4.8821  
#>  edu  (Factor+Higher Order Factors)                        2     4.0844  
#>   All Interactions                                         1     3.1038  
#>  transportation  (Factor+Higher Order Factors)             2    15.3207  
#>   All Interactions                                         1     0.0476  
#>  clean * location  (Factor+Higher Order Factors)           1     0.0000  
#>  location * edu  (Factor+Higher Order Factors)             1     3.1038  
#>  location * transportation  (Factor+Higher Order Factors)  1     0.0476  
#>  TOTAL INTERACTION                                         3     4.8821  
#>  REGRESSION                                                7    90.9761  
#>  ERROR                                                    92   157.2139  
#>  MS      F    P     
#>   5.9105 3.46 0.0356
#>   0.0000 0.00 0.9969
#>   1.2239 0.72 0.5830
#>   1.6274 0.95 0.4188
#>   2.0422 1.20 0.3073
#>   3.1038 1.82 0.1811
#>   7.6604 4.48 0.0139
#>   0.0476 0.03 0.8678
#>   0.0000 0.00 0.9969
#>   3.1038 1.82 0.1811
#>   0.0476 0.03 0.8678
#>   1.6274 0.95 0.4188
#>  12.9966 7.61 <.0001
#>   1.7088

Since all your variables are numeric and restricted to the same range of values, directly comparing the absolute values of the coefficients (as you did in your answer) is one possible way to estimate effect sizes.

However, the answers for one variable might range only from 1-3, while those for another range from 1-5. If both variables were equally important, the second would get a smaller coefficient because of its wider range. To overcome this, you can compute standardized coefficients, also known as "beta coefficients". There are convenience functions for this in add-on packages, but in base R you can achieve the same by standardizing the data before fitting the model:

my_df.scaled <- my_df
my_df.scaled[-1] <- scale(my_df[-1])  # scale the numeric columns only; `location` is character

In addition, you should check whether all coefficients are significantly different from zero.
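As a sketch of that check in base R (using my_df from the question and, for brevity, the main effects only), the estimates and their confidence intervals can be inspected side by side:

```r
# Fit the main-effects model and look at 95% confidence intervals;
# an interval that excludes 0 suggests the coefficient is reliably
# different from zero (interactions omitted here for brevity)
fit <- lm(satisfied ~ clean + edu + transportation, data = my_df)
cbind(estimate = coef(fit), confint(fit))
```

Note that significance is unaffected by standardization, so this check gives the same answer whether you fit on the raw or the scaled data.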

I would look at the residuals to determine the validity of the model. Plots of residuals versus other quantities are used to detect failures of assumptions. The most common plot, especially useful in simple regression, is the plot of residuals versus fitted values. A null plot indicates no failure of assumptions. Curvature might indicate that the fitted mean function is inappropriate. Residuals that seem to increase or decrease in average magnitude with the fitted values might indicate non-constant residual variance. A few relatively large residuals may be indicative of outliers, i.e., cases for which the model is somehow inappropriate.

The assumptions of a linear model are:

  1. Linear relationship: there is a linear relationship between the independent variable x and the dependent variable y.

  2. Independence: the residuals are independent. In particular, there is no correlation between consecutive residuals.

  3. Homoscedasticity: the residuals have constant variance at every level of x.

  4. Normality: the residuals of the model are normally distributed.

If one or more of these assumptions is violated, the results of our linear regression may be unreliable or even misleading.
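All four assumptions can be eyeballed quickly with base R's built-in diagnostic plots; this is only a sketch, using the main-effects model on my_df from the question:

```r
# The default plot() method for lm objects produces four diagnostic plots:
# residuals vs fitted (linearity), normal Q-Q (normality of residuals),
# scale-location (constant variance), residuals vs leverage (outliers)
fit <- lm(satisfied ~ clean + edu + transportation, data = my_df)
par(mfrow = c(2, 2))
plot(fit)
```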

Using the given data, I built a simple linear regression model as follows.

# required libraries
library(caret)
library(ggplot2)
library(magrittr)

# split the train dataset into train and test set
set.seed(2021)
index <- createDataPartition(my_df$satisfied, p = 0.7, list = FALSE)
df_train <- my_df[index, ]
df_test  <- my_df[-index, ]

# Model building
# NB: this fits on the full data; to keep df_test truly held out, use data = df_train
lm_model <- lm(satisfied ~ ., data = my_df)
# Make predictions and compute the R2, RMSE and MAE
predictions <- lm_model %>% predict(df_test)
data.frame( R2 = R2(predictions, df_test$satisfied),
            RMSE = RMSE(predictions, df_test$satisfied),
            MAE = MAE(predictions, df_test$satisfied))
         R2      RMSE       MAE
1 0.4513587 0.9956456 0.8224509
# Residuals = Observed - Predicted
# compute residuals
residualVals <- df_test$satisfied - predictions
df.1 <- data.frame(df_test$satisfied, predictions, 
                   residualVals)
colnames(df.1)<- c("observed","predicted","residuals")
head(df.1)
  observed predicted   residuals
1        5  4.443428  0.55657221
2        7  6.930498  0.06950225
3        4  3.626674  0.37332568
4        3  3.538958 -0.53895795
5        5  5.083349 -0.08334872
6        7  6.097199  0.90280068
ggplot(data = df.1, aes(x=predicted, y=residuals))+
  geom_point()+
  xlab("Predicted values for satisfied")+
  ylab("Residuals")+
  ggtitle("Residual plot")+
  theme_bw()

(Residual plot)

Discussion

Relying on a single metric to understand a model's strengths and weaknesses is problematic. Visualizing the model fit, and in particular the residual plots in the context of a linear regression model, is essential for understanding whether the model is fit for purpose.

When the outcome is a number, the most common way to characterize a model's predictive ability is the root mean squared error (RMSE). This metric is a function of the model residuals, which are the observed values minus the model predictions. The mean squared error (MSE) is calculated by squaring the residuals and averaging them. The RMSE is then calculated by taking the square root of the MSE so that it is in the same units as the original data. The value is usually interpreted as how far, on average, the residuals are from zero, or as the average distance between the observed values and the model predictions. Another common metric is the coefficient of determination, commonly written as R2. This value can be interpreted as the proportion of the information in the data that is explained by the model. Here, an R2 of about 0.45 means the model explains roughly 45% of the variation in satisfied. Simply put, the model above is not good. It should be noted that R2 is a measure of correlation, not accuracy.
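For concreteness, these definitions can be computed by hand from the residuals. The sketch below assumes the `predictions` and `df_test` objects from the code earlier in this answer:

```r
res  <- df_test$satisfied - predictions        # residuals: observed - predicted
mse  <- mean(res^2)                            # mean squared error
rmse <- sqrt(mse)                              # back in the units of `satisfied`
mae  <- mean(abs(res))                         # average absolute miss
rsq  <- cor(df_test$satisfied, predictions)^2  # proportion of variation explained
c(R2 = rsq, RMSE = rmse, MAE = mae)
```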