Computational Resources
It is not possible to estimate how long it will take for a general Markov chain to converge to its stationary distribution. It takes a skilled and thoughtful analysis of the chain to decide whether it has converged to the target distribution and whether the chain is mixing rapidly enough. It is easier, however, to estimate how long a particular simulation might take. The running time of a program that does not have RANDOM statements is roughly linear in the following factors: the number of samples in the input data set (nsamples), the number of simulations (nsim), the number of blocks in the program (nblocks), and the speed of your computer. For an analysis that uses a data set of size nsamples, a simulation length of nsim, and a block design of nblocks, PROC MCMC evaluates the log-likelihood function the following number of times, excluding the tuning phase: nsamples × nsim × nblocks.
The faster your computer evaluates a single log-likelihood function, the faster this program runs. Suppose that you have nsamples equal to 200, nsim equal to 55,000, and nblocks equal to 3. PROC MCMC evaluates the log-likelihood function approximately 3.3 × 10⁷ times. If your computer can evaluate the log likelihood for one observation 10⁶ times per second, this program takes approximately half a minute to run. If you want to increase the number of simulations five-fold, the run time increases approximately five-fold.
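The estimate above can be reproduced with a short back-of-the-envelope calculation. The sketch below (plain Python, not part of PROC MCMC) assumes the evaluation count is nsamples × nsim × nblocks, as stated for a program without RANDOM statements:

```python
def mcmc_runtime_seconds(nsamples, nsim, nblocks, evals_per_second):
    """Estimate wall-clock time from the number of log-likelihood
    evaluations and the machine's per-observation evaluation rate."""
    n_evals = nsamples * nsim * nblocks
    return n_evals / evals_per_second

# The worked example: 200 samples, 55,000 simulations, 3 blocks,
# on a machine that evaluates the log likelihood for one
# observation 10**6 times per second.
t = mcmc_runtime_seconds(200, 55_000, 3, 1_000_000)
print(f"{t:.0f} seconds")  # 33 seconds, about half a minute
```

The five-fold scaling in the last sentence follows directly: multiplying nsim by 5 multiplies the evaluation count, and hence the estimated run time, by 5.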
Each RANDOM statement adds two passes through the input data at each iteration, taking approximately the equivalent computational resource of adding two blocks of parameters.
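If each RANDOM statement costs roughly the same as two extra blocks of parameters, the earlier estimate can be extended with an "effective blocks" term. This is a hedged sketch of that reading, not PROC MCMC's actual internal accounting; `nrandom` is an illustrative parameter, not a procedure option:

```python
def mcmc_evals(nsamples, nsim, nblocks, nrandom=0):
    """Approximate log-likelihood evaluation count, treating each
    RANDOM statement as two additional blocks (two extra data
    passes per iteration, per the text)."""
    effective_blocks = nblocks + 2 * nrandom
    return nsamples * nsim * effective_blocks

# Same example plus one RANDOM statement: 3 + 2 = 5 effective blocks,
# so the work grows from 33 million to 55 million evaluations.
print(mcmc_evals(200, 55_000, 3, nrandom=1))
```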
Of course, larger problems take longer than smaller ones, and if your model is amenable to frequentist treatment, then one of the other SAS procedures might be more suitable. With "regular" likelihoods and a lot of data, the results of a standard frequentist analysis are often asymptotically equivalent to those of a Bayesian approach. If PROC MCMC requires too much CPU time, then perhaps another SAS/STAT tool would be suitable.