stronginference.com
Burn-in, and Other MCMC Folklore - Strong Inference
http://stronginference.com/burn-in-and-other-mcmc-folklore.html
Burn-in, and Other MCMC Folklore. Sat 09 August 2014. I have been slowly working my way through The Handbook of Markov Chain Monte Carlo, a compiled volume edited by Steve Brooks that I picked up at last week's Joint Statistical Meetings. The first chapter is a primer on MCMC by Charles Geyer. In particular, he questions the utility of burn-in: "Burn-in is only one method, and not a particularly good method, for finding a good starting point." ... This yields MAP estimates for all the parameters in the model, ...
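As a rough illustration of the alternative the excerpt hints at, here is a minimal PyMC 2-style sketch that fits a toy model by maximum a posteriori first and then samples with no burn-in interval; the model, priors, and data below are illustrative assumptions, not taken from the post.

    import numpy as np
    import pymc as pm

    # Illustrative toy data: 100 draws from a normal distribution
    data = np.random.normal(loc=3.0, scale=1.5, size=100)

    # Simple normal model with vague priors (assumed for illustration)
    mu = pm.Normal('mu', mu=0.0, tau=1e-3)
    tau = pm.Gamma('tau', alpha=0.01, beta=0.01)
    y = pm.Normal('y', mu=mu, tau=tau, value=data, observed=True)

    # Fit by maximum a posteriori first; this sets each stochastic's value
    # to its MAP estimate, giving the sampler a good starting point
    pm.MAP([mu, tau, y]).fit()

    # Sample from the posterior without discarding a burn-in interval
    mcmc = pm.MCMC([mu, tau, y])
    mcmc.sample(iter=10000, burn=0)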
stronginference.com
pymc - Strong Inference
http://stronginference.com/tag/pymc.html
Sun 30 November 2014. Calculating Bayes factors with PyMC. Statisticians are sometimes interested in comparing two (or more) models, with respect to their relative support by a particular dataset. This may be in order to select the best model to use for inference, or ... Sat 09 August 2014. Burn-in, and Other MCMC Folklore. I have been slowly working my way through The Handbook of Markov Chain Monte Carlo, a compiled volume edited by Steve Brooks that I picked up at last week's Joint Statistical Meetings ...
stronginference.com
Implementing Dirichlet processes for Bayesian semi-parametric models - Strong Inference
http://stronginference.com/implementing-dirichlet-processes-for-bayesian-semi-parametric-models.html
Implementing Dirichlet processes for Bayesian semi-parametric models. Fri 07 March 2014. Use of the term "non-parametric" in the context of Bayesian analysis is something of a misnomer. This is because the first and fundamental step in Bayesian modeling is to specify a ... A useful non-parametric approach for modeling random effects is the Dirichlet process. We require two random samples to generate a DP. First, take a draw of values from the baseline distribution: $\theta_1, \theta_2, \ldots \sim G_0$. $G(x) = $ ...
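The construction the excerpt begins to describe can be sketched with a truncated stick-breaking draw. The NumPy snippet below is a minimal illustration; the concentration parameter, standard-normal baseline, and truncation level are assumptions made here, not values from the post.

    import numpy as np

    def dp_stick_breaking(alpha, baseline_draw, n_atoms=1000, rng=None):
        """Approximate a draw G ~ DP(alpha, G0) by truncated stick-breaking.

        Returns atom locations theta_i (draws from the baseline G0) and
        weights pi_i that sum to (nearly) one.
        """
        rng = np.random.default_rng(rng)
        # First random sample: atom locations drawn from the baseline G0
        theta = baseline_draw(n_atoms)
        # Second random sample: Beta(1, alpha) stick fractions
        beta = rng.beta(1.0, alpha, size=n_atoms)
        # Stick-breaking weights: pi_i = beta_i * prod_{j<i} (1 - beta_j)
        remaining = np.concatenate(([1.0], np.cumprod(1.0 - beta)[:-1]))
        pi = beta * remaining
        return theta, pi

    # Example: baseline G0 taken to be a standard normal (illustrative choice)
    theta, pi = dp_stick_breaking(alpha=2.0,
                                  baseline_draw=lambda n: np.random.standard_normal(n))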
stronginference.com
Statistics - Strong Inference
http://stronginference.com/category/statistics.html
Posts categorized under: Statistics. Sun 30 November 2014. Calculating Bayes factors with PyMC. Statisticians are sometimes interested in comparing two (or more) models, with respect to their relative support by a particular dataset. This may be in order to select the best model to use for inference, or ... Sat 09 August 2014. Burn-in, and Other MCMC Folklore. I have been slowly working my way through The Handbook of Markov Chain Monte Carlo, a compiled volume edited by Steve Brooks. Fri 07 March 2014 ...
stronginference.com
python - Strong Inference
http://stronginference.com/tag/python.html
stronginference.com
bayesian - Strong Inference
http://stronginference.com/tag/bayesian.html
stronginference.com
Calculating Bayes factors with PyMC - Strong Inference
http://stronginference.com/bayes-factors-pymc.html
Calculating Bayes factors with PyMC. Sun 30 November 2014. Statisticians are sometimes interested in comparing two (or more) models, with respect to their relative support by a particular dataset. This may be in order to select the best model to use for inference, or to weight models so that they can be averaged for use in multimodel inference. The Bayes factor is a good choice when comparing two arbitrary models whose parameters have been estimated. Bayes factors are simply ratios of marginal likelihoods. One of the obstacles ...
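For reference, the ratio the excerpt is describing can be written out as follows (a standard definition, added here for clarity rather than quoted from the post):

$$
BF_{12} = \frac{p(D \mid M_1)}{p(D \mid M_2)}
        = \frac{\int p(D \mid \theta_1, M_1)\, p(\theta_1 \mid M_1)\, d\theta_1}
               {\int p(D \mid \theta_2, M_2)\, p(\theta_2 \mid M_2)\, d\theta_2}
$$

Multiplying this ratio by the prior odds of the two models gives their posterior odds.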
stronginference.com
mcmc - Strong Inference
http://stronginference.com/tag/mcmc.html
stronginference.com
Automatic Missing Data Imputation with PyMC - Strong Inference
http://stronginference.com/missing-data-imputation.html
Automatic Missing Data Imputation with PyMC. Sun 18 August 2013. Missing data imputation can be done automatically in PyMC. Types of Missing Data: the appropriate treatment of missing data depends strongly on how the data came to be missing from the dataset. These mechanisms can be broadly classified into three groups, according to how much information and effort is required to deal with them adequately: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). In each of these ...
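A minimal sketch of the kind of automatic imputation the title refers to, assuming the PyMC 2-era masked-array mechanism; the counts, sentinel value, and Poisson model below are illustrative assumptions rather than the post's example.

    import numpy as np
    import pymc as pm

    # Illustrative count data with missing entries encoded by a sentinel value
    raw = np.array([5, 3, -999, 4, 6, 2, -999, 5])
    counts_masked = np.ma.masked_values(raw, value=-999)

    # Simple Poisson rate model with a vague prior (assumed for illustration)
    rate = pm.Exponential('rate', beta=1.0)

    # Passing a masked array as the observed value lets PyMC treat the masked
    # entries as unobserved quantities and impute them while sampling
    counts = pm.Poisson('counts', mu=rate, value=counts_masked, observed=True)

    mcmc = pm.MCMC([rate, counts])
    mcmc.sample(iter=10000, burn=5000)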