I have been reading different questions about singular fits when using glmer(). In general, the idea is that singularity can arise from a very complex random-effects structure. With a simple random structure, it can also happen when the data are not sufficient to estimate the variance-covariance matrix... see, for example, this page by Ben Bolker, Robert Long's answer to this post, or the help page of isSingular().
However, the model I am trying to fit is very simple:
mod.detection_rand <- glmer(reaction ~ Pedra + (1|Channel), family="binomial", data = garotes)
boundary (singular) fit: see ?isSingular
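For reference, the singularity warning can be confirmed directly on the fitted object; a minimal sketch (the tolerance value shown is just the default):

```r
library(lme4)

# Returns TRUE when some variance component sits at the boundary (here, zero)
isSingular(mod.detection_rand, tol = 1e-4)

# Inspect the estimated variance components directly
VarCorr(mod.detection_rand)
```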
...and apparently I have enough data for the different combinations of the (fixed and random) predictors:
library(tidyverse)
garotes %>%
group_by(Channel, Pedra) %>%
summarise(n = n())
# A tibble: 16 x 3
# Groups: Channel [8]
Channel Pedra n
<int> <fct> <int>
1 1 No 13
2 1 Yes 13
3 2 No 14
4 2 Yes 12
5 3 No 12
6 3 Yes 14
7 4 No 13
8 4 Yes 13
9 5 No 13
10 5 Yes 13
11 6 No 14
12 6 Yes 12
13 7 No 13
14 7 Yes 13
15 8 No 14
16 8 Yes 12
What do you think?
EDIT: Here is the model summary, from summary(mod.detection_rand):
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: reaction ~ Pedra + (1 | Channel)
Data: garotes
AIC BIC logLik deviance df.resid
261.5 271.5 -127.7 255.5 205
Scaled residuals:
Min 1Q Median 3Q Max
-1.8533 -0.9449 0.5396 0.5396 1.0583
Random effects:
Groups Name Variance Std.Dev.
Channel (Intercept) 0 0
Number of obs: 208, groups: Channel, 8
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.1133 0.1946 -0.582 0.56
PedraYes 1.3473 0.3066 4.394 1.11e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr)
PedraYes -0.635
convergence code: 0
boundary (singular) fit: see ?isSingular
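Since the Channel variance is estimated as exactly zero, the mixed model should collapse to the equivalent logistic regression without the random intercept; a quick sketch to check this (assuming garotes is loaded and lme4 is attached):

```r
# With the random-intercept variance at zero, the GLMM fixed effects should
# essentially match a plain logistic regression without the Channel term
mod.glm <- glm(reaction ~ Pedra, family = binomial, data = garotes)
summary(mod.glm)

# Side-by-side comparison of the fixed-effect estimates
cbind(glmer = fixef(mod.detection_rand), glm = coef(mod.glm))
```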
EDIT2: Following Billy's comment:
bobyqa : boundary (singular) fit: see ?isSingular
[OK]
Nelder_Mead : boundary (singular) fit: see ?isSingular
[OK]
nlminbwrap : boundary (singular) fit: see ?isSingular
[OK]
nmkbw : boundary (singular) fit: see ?isSingular
[OK]
optimx.L-BFGS-B : boundary (singular) fit: see ?isSingular
[OK]
nloptwrap.NLOPT_LN_NELDERMEAD : boundary (singular) fit: see ?isSingular
[OK]
nloptwrap.NLOPT_LN_BOBYQA : boundary (singular) fit: see ?isSingular
[OK]
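For completeness, the optimizer comparison above can be reproduced with lme4's allFit(); a sketch, assuming the fitted model object from above:

```r
library(lme4)

# Refit the same model with every available optimizer and compare the results
aa <- allFit(mod.detection_rand)
summary(aa)$which.OK  # which optimizers completed without error
summary(aa)$sdcor     # random-effect SD estimates under each optimizer
```

If every optimizer lands on the same zero variance, the singularity is a property of the data rather than a numerical artifact of one optimizer.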
EDIT3: Following Isabella's answer:
I checked the structure of the outcome variable (reaction). Here is the resulting table:
library(tidyverse)
garotes %>%
group_by(Channel, Pedra, reaction) %>%
summarise(n = n()) %>%
print(n = Inf)
# A tibble: 32 x 4
# Groups: Channel, Pedra [16]
Channel Pedra reaction n
<int> <fct> <int> <int>
1 1 No 0 6
2 1 No 1 7
3 1 Yes 0 3
4 1 Yes 1 10
5 2 No 0 7
6 2 No 1 7
7 2 Yes 0 2
8 2 Yes 1 10
9 3 No 0 8
10 3 No 1 4
11 3 Yes 0 6
12 3 Yes 1 8
13 4 No 0 7
14 4 No 1 6
15 4 Yes 0 3
16 4 Yes 1 10
17 5 No 0 8
18 5 No 1 5
19 5 Yes 0 1
20 5 Yes 1 12
21 6 No 0 6
22 6 No 1 8
23 6 Yes 0 2
24 6 Yes 1 10
25 7 No 0 6
26 7 No 1 7
27 7 Yes 0 2
28 7 Yes 1 11
29 8 No 0 8
30 8 No 1 6
31 8 Yes 0 4
32 8 Yes 1 8
Apparently, every Channel and every Pedra treatment shows both types of outcome... so this is not like the example Isabella described. In addition, I tried to fit this GLMM with library(GLMMadaptive), and it did not converge either.
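For reference, the GLMMadaptive fit I tried looked roughly like the sketch below; it uses adaptive Gauss-Hermite quadrature rather than the Laplace approximation (the nAGQ value here is just an illustrative choice):

```r
library(GLMMadaptive)

# Same random-intercept logistic model, fitted with adaptive quadrature
mod.ga <- mixed_model(fixed = reaction ~ Pedra,
                      random = ~ 1 | Channel,
                      data = garotes,
                      family = binomial(),
                      nAGQ = 11)
summary(mod.ga)
```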
EDIT4: The dataset I am working with, in case anyone is curious:
Channel Pedra reaction
1 No 1
2 No 0
3 No 0
4 No 0
5 No 0
6 No 1
7 No 0
8 No 0
1 No 1
2 No 1
3 No 1
4 No 1
5 No 0
6 No 0
7 No 0
8 No 0
1 No 0
2 No 1
3 No 0
4 No 0
5 No 0
6 No 0
7 No 0
8 No 1
1 No 0
2 No 1
3 Yes 0
4 Yes 1
5 Yes 1
6 Yes 1
7 Yes 1
8 Yes 0
1 Yes 1
2 Yes 1
3 Yes 0
4 Yes 0
5 No 0
6 No 1
7 Yes 1
8 Yes 1
1 Yes 0
2 Yes 1
3 Yes 1
4 Yes 1
5 Yes 1
6 Yes 0
7 No 1
8 No 1
1 Yes 1
2 Yes 1
3 Yes 1
4 Yes 1
5 Yes 1
6 Yes 1
7 Yes 1
8 Yes 1
1 Yes 1
2 Yes 1
3 Yes 1
4 Yes 1
5 Yes 0
6 Yes 1
7 Yes 1
8 Yes 1
1 Yes 1
2 Yes 1
3 Yes 0
4 Yes 1
5 Yes 1
6 Yes 1
7 Yes 0
8 Yes 0
1 Yes 1
2 Yes 1
3 Yes 0
4 Yes 0
5 Yes 1
6 Yes 1
7 Yes 1
8 Yes 0
1 Yes 1
2 Yes 1
3 Yes 0
4 Yes 1
5 Yes 1
6 Yes 1
7 Yes 0
8 Yes 0
1 Yes 1
2 Yes 0
3 Yes 1
4 Yes 0
5 Yes 1
6 Yes 1
7 Yes 1
8 Yes 1
1 Yes 1
2 Yes 1
3 Yes 0
4 Yes 1
5 Yes 1
6 Yes 0
7 Yes 1
8 Yes 1
1 Yes 1
2 Yes 1
3 Yes 1
4 Yes 1
5 Yes 1
6 Yes 1
7 Yes 1
8 Yes 1
1 Yes 0
2 Yes 0
3 Yes 1
4 Yes 1
5 Yes 1
6 Yes 1
7 Yes 1
8 Yes 1
1 Yes 1
2 No 0
3 Yes 1
4 No 1
5 Yes 1
6 No 1
7 Yes 1
8 No 1
1 No 0
2 Yes 1
3 No 0
4 Yes 1
5 No 1
6 Yes 1
7 No 1
8 Yes 1
1 Yes 0
2 No 1
3 Yes 1
4 No 0
5 Yes 1
6 No 1
7 Yes 1
8 No 0
1 No 0
2 No 1
3 No 1
4 No 0
5 No 1
6 No 0
7 No 0
8 No 0
1 No 1
5 No 0
3 No 1
4 No 1
2 No 1
6 No 0
7 No 1
8 No 0
1 No 0
5 No 0
3 No 0
4 No 0
2 No 1
6 No 0
7 No 0
8 No 0
1 No 1
5 No 1
3 No 1
4 No 0
2 No 0
6 No 1
7 No 1
8 No 0
1 No 1
5 No 0
3 No 0
4 No 1
2 No 0
6 No 1
7 No 1
8 No 1
1 No 1
5 No 1
3 No 0
4 No 1
2 No 0
6 No 1
7 No 1
8 No 1
1 No 1
5 No 1
3 No 0
4 No 0
2 No 0
6 No 1
7 No 0
8 No 0
1 No 0
5 No 0
3 No 0
4 No 1
2 No 0
6 No 0
7 No 1
8 No 1
Anyway, thank you very much for all your replies! I am learning a lot from them!