A list of papers I have been reading so far #32
Replies: 12 comments 1 reply
-
Thanks @hkim89. Do you have the PDF of Ormerod, J. T., and Wand, M. P. (2008), "Variational Approximations for Logistic Mixed Models," that you could share with me via Slack? It claims to be a better approach than Jaakkola and Jordan's (as I read in their 2012 paper), but I cannot seem to find it.
-
Also, I'm wondering what the goal is here: can this potentially handle many correlated fixed-effect covariates in the model? I'm not sure we ever need that in the context of genetic association studies, because we typically control for age, sex, a few other things, and a few genotype PCs; not too many covariates to control for. Am I understanding this correctly?
-
Jaakkola's VA extension: Rijmen, Frank, and Jiří Vomlel. "Assessing the performance of variational methods for mixed logistic regression models." Journal of Statistical Computation and Simulation 78.8 (2008): 765-779.

Bayesian penalization (how to apply an L2 penalty to GLMMs): Tung, Dao Thanh, Minh-Ngoc Tran, and Tran Manh Cuong. "Bayesian adaptive lasso with variational Bayes for variable selection in high-dimensional generalized linear mixed models." Communications in Statistics - Simulation and Computation 48.2 (2019): 530-543.
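Since the Rijmen and Vomlel paper assesses Jaakkola's method, here is a minimal numpy sketch of the Jaakkola-Jordan quadratic lower bound on log sigmoid(x) that these papers build on. This is my own illustration (function names `lam`, `jj_lower_bound` are mine), not code from any of the papers:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lam(xi):
    # lambda(xi) = tanh(xi/2) / (4*xi), with the limit 1/8 as xi -> 0
    xi = np.asarray(xi, dtype=float)
    small = np.abs(xi) < 1e-8
    safe = np.where(small, 1.0, xi)  # avoid 0/0 in the masked-out branch
    return np.where(small, 0.125, np.tanh(safe / 2.0) / (4.0 * safe))

def jj_lower_bound(x, xi):
    """Jaakkola-Jordan quadratic lower bound on log sigmoid(x).

    Global lower bound for every xi, and tight (exact) at x = +/- xi;
    this is what makes the logistic likelihood tractable under a
    Gaussian variational posterior.
    """
    return np.log(sigmoid(xi)) + (x - xi) / 2.0 - lam(xi) * (x**2 - xi**2)

# the bound holds everywhere and touches the true function at x = xi
xs = np.linspace(-6.0, 6.0, 121)
assert np.all(jj_lower_bound(xs, 1.5) <= np.log(sigmoid(xs)) + 1e-12)
assert np.isclose(jj_lower_bound(1.5, 1.5), np.log(sigmoid(1.5)))
```

Maximizing this bound over xi is exactly the M-step for the variational parameter discussed later in the thread.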
-
Another Ormerod paper I am reading alongside his 2012 one. This would also be a good introduction to VA for Yin and the others in the meeting.
-
They show how to incorporate flat priors on covariates (fixed effects) into VA for GLM/GLMM: Carbonetto & Stephens 2017, varbvs: Fast Variable Selection for Large-scale Regression. I will focus on these papers to work out the derivation for SuSiE-GLMM this week.
-
While reading Carbonetto's papers, I found the following to help me understand his flat prior for covariates, since his derivation seems intuitive but is too brief to follow in detail: Ibrahim, Joseph G., and Purushottam W. Laud. "On Bayesian analysis of generalized linear models using Jeffreys's prior." Journal of the American Statistical Association 86.416 (1991): 981-986.
-
Found Michael Jordan's lecture notes on Bayesian modeling and inference for studying priors (lectures 5-10); I have read through lecture 8 (conjugate priors, Jeffreys priors, reference priors). They seem helpful, but I will focus on the tutorial tomorrow.
-
Continued reading Jordan's lecture notes above, through lecture 13 (reference priors, nuisance parameters, multivariate regression, g-priors). Also: Rakitsch, Barbara, et al. "It is all in the noise: Efficient multi-task Gaussian process inference with structured residuals." Advances in Neural Information Processing Systems 26 (2013).
-
Read Ormerod et al. 2012 in detail (methods). I will delve into its appendix for all the mathematical details, looking back at Carbonetto's papers from time to time (they get better and better).
-
The reference for the appendix of Ormerod et al. 2012, which gives the core numerical method: Liu, Qing, and Donald A. Pierce. "A note on Gauss-Hermite quadrature." Biometrika 81.3 (1994): 624-629.
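Since this note keeps coming up: the kind of one-dimensional Gaussian integral involved can be sketched with plain (non-adaptive) Gauss-Hermite quadrature in a few lines of numpy. This is my own toy illustration of the quadrature rule itself (`expect_sigmoid_gh` is my name), not the adaptive scheme from the Liu & Pierce paper:

```python
import numpy as np

def expect_sigmoid_gh(m, s, n=20):
    """E[sigmoid(X)] for X ~ N(m, s^2) via n-point Gauss-Hermite quadrature.

    The change of variables x = m + sqrt(2)*s*t maps the Gauss-Hermite
    weight exp(-t^2) onto the N(m, s^2) density, giving
    (1/sqrt(pi)) * sum_i w_i * sigmoid(m + sqrt(2)*s*t_i).
    """
    t, w = np.polynomial.hermite.hermgauss(n)
    x = m + np.sqrt(2.0) * s * t
    return np.sum(w / (1.0 + np.exp(-x))) / np.sqrt(np.pi)
```

By symmetry, E[sigmoid(X)] is exactly 1/2 when m = 0 for any s, which makes a handy sanity check for the implementation.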
-
I have figured out how covariates are controlled for in Carbonetto's papers. The formulas turn out to be correct, but the model introduction in the appendix for logistic regression (Peter Carbonetto, Xiang Zhou, and Matthew Stephens 2017) contains typos that led me down the wrong path and cost a lot of time. I will check the rest of the papers and then move to susie-glmm to solve for covariate control. Hopefully this will resolve susie-glmm's numerical instability.
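One numerical sanity check for the covariate-control story: in the linear-model special case, integrating covariates out under a flat prior amounts to projecting them out of both y and X, and the resulting coefficients match a joint least-squares fit (the Frisch-Waugh-Lovell theorem). A sketch on my own toy data, not Carbonetto's code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 5, 3
# Z: covariates to control for (including an intercept); X: variables of interest
Z = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
X = rng.normal(size=(n, p))
y = Z @ rng.normal(size=k) + 0.8 * X[:, 0] + rng.normal(size=n)

# Project the covariates out of y and X (flat-prior integration in the
# linear case reduces to exactly this residualization)
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)  # hat matrix for Z
y_res = y - P @ y
X_res = X - P @ X

# Coefficients on X agree between the joint fit and the residualized fit
beta_joint = np.linalg.lstsq(np.column_stack([Z, X]), y, rcond=None)[0][k:]
beta_res = np.linalg.lstsq(X_res, y_res, rcond=None)[0]
assert np.allclose(beta_joint, beta_res)
```

For the logistic model this equivalence is only approximate, which is presumably why the variational parameter still needs adjusting for the integrated-out covariates.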
-
I wonder if you got any response from Peter. There is one last part to figure out: adjusting ξ (the variational parameter) in the M-step for the covariates that were integrated out in the earlier step. While thinking about it, I have moved on to reading Ormerod along with the reference (A note on Gauss-Hermite quadrature).
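To make the ξ adjustment concrete, here is a minimal sketch of the standard Jaakkola-Jordan coordinate ascent for plain Bayesian logistic regression, where the M-step has the closed form ξ_i² = x_i'(S + m m')x_i under q(β) = N(m, S). This is my own illustration of the textbook update (names `jj_logistic_vb`, `prior_var` are mine), not the susie-glmm model with integrated-out covariates:

```python
import numpy as np

def lam(xi):
    # lambda(xi) = tanh(xi/2) / (4*xi), with the limit 1/8 as xi -> 0
    xi = np.asarray(xi, dtype=float)
    small = np.abs(xi) < 1e-8
    safe = np.where(small, 1.0, xi)
    return np.where(small, 0.125, np.tanh(safe / 2.0) / (4.0 * safe))

def jj_logistic_vb(X, y, prior_var=10.0, iters=50):
    """Coordinate ascent for Bayesian logistic regression under the
    Jaakkola-Jordan bound, with prior beta ~ N(0, prior_var * I).

    Alternates a Gaussian update for q(beta) = N(m, S) with the
    closed-form M-step xi_i^2 = x_i' (S + m m') x_i.
    """
    n, p = X.shape
    xi = np.ones(n)
    S0inv = np.eye(p) / prior_var
    for _ in range(iters):
        # E-step: Gaussian q(beta) given current xi
        Sinv = S0inv + 2.0 * (X.T * lam(xi)) @ X   # sum_i 2*lam(xi_i) x_i x_i'
        S = np.linalg.inv(Sinv)
        m = S @ (X.T @ (y - 0.5))
        # M-step: optimal xi given q(beta)
        M2 = S + np.outer(m, m)                    # E_q[beta beta']
        xi = np.sqrt(np.einsum('ij,jk,ik->i', X, M2, X))
    return m, S, xi
```

In this plain version each ξ_i only depends on the current q(β); the open question in the thread is how that update changes once some coefficients have been integrated out beforehand.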
-
I have trimmed this down to a list of papers I have focused on reading so far, although more than these were sorted through.

MLE-based variational approximation (to understand whether I can apply VA in a frequentist way to GLMMs, and how VA is constructed by introducing variational parameters):
approximation. Annals of Statistics, 39, 2502-2532.
Journal of Computational and Graphical Statistics, 21, 2-17.
of the Ninth Iranian Statistics Conference, Department of Statistics, University of Isfahan, Isfahan, Iran, pp. 450-467.

EM with penalized likelihood: to see how to apply penalty terms to EM.

Bayesian penalization or related: to think about how to build penalization into the fixed terms in glmm/glmm-susie. More papers relevant to this will be read together.