
Markov Chain Monte Carlo

Markov Chain Monte Carlo (MCMC): the estimation of posterior probability distributions using a stochastic process.

Here we will produce samples from the joint posterior of a model without maximizing anything, sampling directly from the posterior without assuming a Gaussian, or any other, shape for it.

Cost: estimation takes much longer, and specifying the model takes more work. Benefit: we don't have to assume multivariate normality, and we can directly estimate models such as GLMs and multilevel models.

8.1. Good King Markov and His island Kingdom

num_weeks <- 1e5
positions <- rep(0, num_weeks)
current <- 10

for (i in 1:num_weeks) {
    # record current position
    positions[i] <- current

    # flip coin to generate proposal
    proposal <- current + sample(c(-1, 1), size=1)

    # make sure he loops around the archipelago
    if (proposal < 1) proposal <- 10
    if (proposal > 10) proposal <- 1

    # move?
    prob_move <- proposal / current
    current <- ifelse(runif(1) < prob_move, proposal, current)
}

Use the ratio of the proposal island’s population to the current island’s population as the probability of moving.
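Assuming the simulation above has run, we can check that the long-run visit frequencies are roughly proportional to the island populations (1 through 10):

```r
# proportion of weeks spent on each island
freq <- table(positions) / num_weeks

# the stationary distribution should be proportional to population,
# i.e. island k should receive about k / sum(1:10) of the visits
expected <- (1:10) / sum(1:10)
round(cbind(observed = as.numeric(freq), expected = expected), 3)
```

The observed and expected columns should agree to within simulation noise.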

8.2. Markov Chain Monte Carlo

Metropolis algorithm: the real goal is to draw samples from an unknown and usually complex target distribution, such as a posterior probability distribution.

  • The ‘islands’ are parameter values; they need not be discrete, but can take on a continuous range of values.
  • The ‘population sizes’ are the posterior probabilities at each parameter value.
  • The ‘weeks’ are the samples taken from the joint posterior of the parameters in the model.

As long as the proposal at each step is symmetric (equal chance of proposing a move from A to B and from B to A), this procedure yields a collection of samples from the joint posterior.
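As an illustrative sketch (not from the text), the same Metropolis logic for a continuous parameter, targeting a standard normal "posterior" with a symmetric Gaussian proposal:

```r
set.seed(1)
n_samples <- 1e4
samples <- numeric(n_samples)
current <- 0

for (i in 1:n_samples) {
    # symmetric Gaussian proposal centered on the current value
    proposal <- current + rnorm(1, mean=0, sd=0.5)

    # acceptance probability: ratio of target densities,
    # analogous to the ratio of island populations above
    prob_move <- dnorm(proposal) / dnorm(current)
    if (runif(1) < prob_move) current <- proposal
    samples[i] <- current
}

# the samples should approximate the target: mean near 0, sd near 1
c(mean(samples), sd(samples))
```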

8.2.1. Gibbs sampling

Metropolis-Hastings also allows asymmetric proposals. This makes it easier to handle parameters (such as standard deviations) that are bounded at zero. A better reason: it lets us generate proposals that explore the posterior distribution more efficiently.

One of these techniques is Gibbs sampling. Limitations: (1) conjugate priors are needed, and maybe you don't want to use those; (2) with large numbers of parameters Gibbs becomes very inefficient.
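As a sketch of the idea (not from the text), Gibbs sampling for a standard bivariate normal with correlation rho, where each full conditional is a known normal distribution we can sample from directly:

```r
set.seed(1)
rho <- 0.8
n_samples <- 5000
x <- y <- numeric(n_samples)

for (i in 2:n_samples) {
    # full conditional of x given y is Normal(rho*y, sqrt(1-rho^2))
    x[i] <- rnorm(1, mean=rho * y[i-1], sd=sqrt(1 - rho^2))
    # full conditional of y given x is Normal(rho*x, sqrt(1-rho^2))
    y[i] <- rnorm(1, mean=rho * x[i], sd=sqrt(1 - rho^2))
}

# the sample correlation should be near rho
cor(x, y)
```

Each parameter is updated by sampling from its conditional distribution given the others; this is only possible because the conjugate structure makes those conditionals known.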

8.2.2. Hamiltonian Monte Carlo

A less random approach is often more efficient, but requires more thought. The objective is to sweep across the log-posterior surface (the bowl), adjusting speed in proportion to how high up we are.

HMC requires continuous parameters. Stan automates much of the model tuning that HMC requires.
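A minimal sketch of HMC with a leapfrog integrator for a one-dimensional standard-normal target (illustrative only; Stan's implementation is far more sophisticated, with adaptive step sizes and trajectory lengths):

```r
set.seed(1)
# "potential energy" = negative log-posterior, here a standard normal target
U <- function(q) 0.5 * q^2
grad_U <- function(q) q

hmc_step <- function(q, eps=0.1, L=20) {
    p <- rnorm(1)                     # sample a fresh momentum
    q_new <- q; p_new <- p
    # leapfrog integration of the Hamiltonian dynamics
    p_new <- p_new - eps/2 * grad_U(q_new)
    for (l in 1:L) {
        q_new <- q_new + eps * p_new
        if (l < L) p_new <- p_new - eps * grad_U(q_new)
    }
    p_new <- p_new - eps/2 * grad_U(q_new)
    # Metropolis acceptance on the change in total energy
    if (runif(1) < exp(U(q) + p^2/2 - U(q_new) - p_new^2/2)) q_new else q
}

samples <- numeric(2000)
q <- 0
for (i in 1:2000) samples[i] <- q <- hmc_step(q)
c(mean(samples), sd(samples))
```

The gradient is what lets HMC "glide" across the surface, and it is also why the parameters must be continuous.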

8.3. Easy HMC: map2stan

library(rethinking)
data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[complete.cases(d$rgdppc_2000), ]

8.3.1. Preparation

dd.trim <- dd[, c("log_gdp", "rugged", "cont_africa")]

8.3.2. Estimation

m8.1stan <- map2stan(
    alist(
        log_gdp ~ dnorm(mu, sigma),
        mu <- a + bR*rugged + bA*cont_africa + bAR*rugged*cont_africa,
        a ~ dnorm(0, 100),
        bR ~ dnorm(0, 10),
        bA ~ dnorm(0, 10),
        bAR ~ dnorm(0, 10),
        sigma ~ dcauchy(0,2)
    ), data=dd.trim)

precis(m8.1stan)
  1. The interval boundaries are HPDI, not PI.
  2. Two new columns: n_eff (estimate of the effective number of samples) and Rhat (estimate of convergence). The latter should approach 1.

8.3.3. Sampling again, in parallel.

m8.1stan_4chains <- map2stan(m8.1stan, chains=4, cores=4)
precis(m8.1stan_4chains)
#post <- extract.samples(m8.1stan)
#pairs(post)

8.3.4. Visualization

pairs(m8.1stan)

8.3.5. Using the samples

show(m8.1stan)

8.3.6. Checking the chain

plot(m8.1stan)

8.4. Care and feeding of your Markov Chain

8.4.1. How many samples do you need?

Control the number of samples from the chain using the iter and warmup parameters.

How many samples do we need?

  1. What matters is the effective number of samples, not the raw number.
  2. It depends on what you want to know. If all you want are posterior means, it doesn’t take many samples to get good estimates. For skewed posteriors, you have to think about the region of the distribution that interests you.

A reasonable default is to devote half of the total samples to warmup.

8.4.2. How many chains do you need?

chains specifies the number of independent Markov chains to sample from. All of the non-warmup samples from each chain are automatically combined in the resulting inferences.

Three answers

  1. When debugging a model, use a single chain.
  2. When deciding whether the chains are valid, you need more than one chain.
  3. When you begin the final run that you’ll make inferences from, you only really need one chain.

four short chains to check, one long chain for inference

Bad chains tend to have conspicuous behavior.

8.4.3. Taming a wild chain

y <- c(-1, 1)
m8.2 <- map2stan(
    alist(
        y ~ dnorm(mu, sigma),
        mu <- alpha
    ), data=list(y=y), start=list(alpha=0, sigma=1),
    chains=2, iter=4000, warmup=1000)

plot(m8.2)
m8.3 <- map2stan(
    alist(
        y ~ dnorm(mu, sigma),
        mu <- alpha,
        alpha ~ dnorm(1, 10),
        sigma ~ dcauchy(0, 1)
    ), data=list(y=y), start=list(alpha=0, sigma=1),
    chains=2, iter=4000, warmup=1000)

precis(m8.3)

y <- rnorm(100, mean=0, sd=1)

m8.4 <- map2stan(
    alist(
        y ~ dnorm(mu, sigma),
        mu <- a1 + a2,
        sigma ~ dcauchy(0, 1)
    ), data=list(y=y), start=list(a1=0, a2=0, sigma=1),
    chains=2, iter=4000, warmup=1000)

precis(m8.4)

m8.5 <- map2stan(
    alist(
        y ~ dnorm(mu, sigma),
        mu <- a1 + a2,
        a1 ~ dnorm(0, 10),
        a2 ~ dnorm(0, 10),
        sigma ~ dcauchy(0, 1)
    ), data=list(y=y), start=list(a1=0, a2=0, sigma=1),
    chains=2, iter=4000, warmup=1000)

precis(m8.5)

8.6. Practice

library(rethinking)

8E1

  1. The proposal distribution must be symmetric for a simple Metropolis algorithm.

8E2

Gibbs sampling achieves its extra efficiency through adaptive proposals: the distribution of proposed parameter values adjusts itself depending on the current parameter values, exploiting conjugate pairs of priors and likelihoods.

8E3

HMC requires continuous parameters. It cannot use its ‘glide’ process across the posterior surface when the parameters are discrete.

8E4

The effective number of samples is the number of independent samples, meaning samples that are not autocorrelated.
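A crude sketch (not from the text) of how autocorrelation shrinks the effective sample size, using the common approximation n_eff ≈ N / (1 + 2·Σρₖ) on a deliberately autocorrelated AR(1) chain:

```r
set.seed(1)
# an AR(1) chain with strong autocorrelation (phi = 0.9)
N <- 1e4
chain <- numeric(N)
for (i in 2:N) chain[i] <- 0.9 * chain[i-1] + rnorm(1)

# crude effective-sample-size estimate:
# n_eff ~ N / (1 + 2 * sum of positive autocorrelations)
rho <- acf(chain, lag.max=100, plot=FALSE)$acf[-1]  # drop lag 0
rho <- rho[rho > 0]
N / (1 + 2 * sum(rho))
```

Despite 10,000 raw draws, the estimated effective number of samples is far smaller, which is exactly what n_eff in precis output is reporting.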

8E5

It should approach 1, not higher.

8E6

8M1

m8m1_unif <- map2stan(
    alist(
        log_gdp ~ dnorm(mu, sigma),
        mu <- a + bR*rugged + bA*cont_africa + bAR*rugged*cont_africa,
        a ~ dnorm(0, 100),
        bR ~ dnorm(0, 10),
        bA ~ dnorm(0, 10),
        bAR ~ dnorm(0, 10),
        sigma ~ dunif(0, 10)
    ), data=dd.trim)


m8m1_exp <- map2stan(
    alist(
        log_gdp ~ dnorm(mu, sigma),
        mu <- a + bR*rugged + bA*cont_africa + bAR*rugged*cont_africa,
        a ~ dnorm(0, 100),
        bR ~ dnorm(0, 10),
        bA ~ dnorm(0, 10),
        bAR ~ dnorm(0, 10),
        sigma ~ dexp(1)
    ), data=dd.trim)
sigma_unif <- extract.samples(m8m1_unif)
sigma_exp <- extract.samples(m8m1_exp)

dens(sigma_unif$sigma, xlab="sigma", xlim=c(0.5,1.5), col="red")
dens(sigma_exp$sigma, add=TRUE, col="blue")

The posteriors are very similar between the uniform and exponential priors.

m8m1_cauchy10 <- map2stan(
    alist(
        log_gdp ~ dnorm(mu, sigma),
        mu <- a + bR*rugged + bA*cont_africa + bAR*rugged*cont_africa,
        a ~ dnorm(0, 100),
        bR ~ dnorm(0, 10),
        bA ~ dnorm(0, 10),
        bAR ~ dnorm(0, 10),
        sigma ~ dcauchy(0, 10)
    ), data=dd.trim)


m8m1_cauchy5 <- map2stan(
    alist(
        log_gdp ~ dnorm(mu, sigma),
        mu <- a + bR*rugged + bA*cont_africa + bAR*rugged*cont_africa,
        a ~ dnorm(0, 100),
        bR ~ dnorm(0, 10),
        bA ~ dnorm(0, 10),
        bAR ~ dnorm(0, 10),
        sigma ~ dcauchy(0, 5)
    ), data=dd.trim)


m8m1_cauchy1 <- map2stan(
    alist(
        log_gdp ~ dnorm(mu, sigma),
        mu <- a + bR*rugged + bA*cont_africa + bAR*rugged*cont_africa,
        a ~ dnorm(0, 100),
        bR ~ dnorm(0, 10),
        bA ~ dnorm(0, 10),
        bAR ~ dnorm(0, 10),
        sigma ~ dcauchy(0, 1)
    ), data=dd.trim)


m8m1_cauchy01 <- map2stan(
    alist(
        log_gdp ~ dnorm(mu, sigma),
        mu <- a + bR*rugged + bA*cont_africa + bAR*rugged*cont_africa,
        a ~ dnorm(0, 100),
        bR ~ dnorm(0, 10),
        bA ~ dnorm(0, 10),
        bAR ~ dnorm(0, 10),
        sigma ~ dcauchy(0, .1)
    ), data=dd.trim)


sigma_cauchy10 <- extract.samples(m8m1_cauchy10)
sigma_cauchy5 <- extract.samples(m8m1_cauchy5)
sigma_cauchy1 <- extract.samples(m8m1_cauchy1)
sigma_cauchy01 <- extract.samples(m8m1_cauchy01)

dens(sigma_cauchy10$sigma, xlab="sigma", xlim=c(0.5,1.5), col="red")
dens(sigma_cauchy5$sigma, add=TRUE, col="blue")
dens(sigma_cauchy1$sigma, add=TRUE, col="green")
dens(sigma_cauchy01$sigma, add=TRUE, col="black")

m8m1_dexp10 <- map2stan(
    alist(
        log_gdp ~ dnorm(mu, sigma),
        mu <- a + bR*rugged + bA*cont_africa + bAR*rugged*cont_africa,
        a ~ dnorm(0, 100),
        bR ~ dnorm(0, 10),
        bA ~ dnorm(0, 10),
        bAR ~ dnorm(0, 10),
        sigma ~ dexp(10)
    ), data=dd.trim)


m8m1_dexp5 <- map2stan(
    alist(
        log_gdp ~ dnorm(mu, sigma),
        mu <- a + bR*rugged + bA*cont_africa + bAR*rugged*cont_africa,
        a ~ dnorm(0, 100),
        bR ~ dnorm(0, 10),
        bA ~ dnorm(0, 10),
        bAR ~ dnorm(0, 10),
        sigma ~ dexp(5)
    ), data=dd.trim)


m8m1_dexp1 <- map2stan(
    alist(
        log_gdp ~ dnorm(mu, sigma),
        mu <- a + bR*rugged + bA*cont_africa + bAR*rugged*cont_africa,
        a ~ dnorm(0, 100),
        bR ~ dnorm(0, 10),
        bA ~ dnorm(0, 10),
        bAR ~ dnorm(0, 10),
        sigma ~ dexp(1)
    ), data=dd.trim)


m8m1_dexp01 <- map2stan(
    alist(
        log_gdp ~ dnorm(mu, sigma),
        mu <- a + bR*rugged + bA*cont_africa + bAR*rugged*cont_africa,
        a ~ dnorm(0, 100),
        bR ~ dnorm(0, 10),
        bA ~ dnorm(0, 10),
        bAR ~ dnorm(0, 10),
        sigma ~ dexp(.1)
    ), data=dd.trim)


sigma_dexp10 <- extract.samples(m8m1_dexp10)
sigma_dexp5 <- extract.samples(m8m1_dexp5)
sigma_dexp1 <- extract.samples(m8m1_dexp1)
sigma_dexp01 <- extract.samples(m8m1_dexp01)

dens(sigma_dexp10$sigma, xlab="sigma", xlim=c(0.7,1.2), col="red")
dens(sigma_dexp5$sigma, add=TRUE, col="blue")
dens(sigma_dexp1$sigma, add=TRUE, col="green")
dens(sigma_dexp01$sigma, add=TRUE, col="black")

As the rate of the exponential prior increases, the prior concentrates near zero and the posteriors for sigma become more asymmetric.

8M3

m <- map2stan(
  alist(
    log_gdp ~ dnorm( mu , sigma ) ,
    mu <- a + bR*rugged + bA*cont_africa + bAR*rugged*cont_africa ,
    a ~ dnorm(0,100),
    bR ~ dnorm(0,10),
    bA ~ dnorm(0,10),
    bAR ~ dnorm(0,10),
    sigma ~ dcauchy(0,2)
), data=dd.trim )


m.warmup1 <- map2stan(m, chains=4, cores=4, warmup=1, iter=1000)
m.warmup5 <- map2stan(m, chains=4, cores=4, warmup=5, iter=1000)
m.warmup10 <- map2stan(m, chains=4, cores=4, warmup=10, iter=1000)
m.warmup20 <- map2stan(m, chains=4, cores=4, warmup=20, iter=1000)
m.warmup50 <- map2stan(m, chains=4, cores=4, warmup=50, iter=1000)
m.warmup100 <- map2stan(m, chains=4, cores=4, warmup=100, iter=1000)
m.warmup200 <- map2stan(m, chains=4, cores=4, warmup=200, iter=1000)
m.warmup1000 <- map2stan(m, chains=4, cores=4, warmup=1000, iter=1000)

precis(m.warmup1)

Compare the n_eff values: at around 50 warmup iterations n_eff begins to rise and Rhat approaches 1.

The console output from these runs (condensed here; the raw interleaved per-chain logs are omitted) shows the cost of too little warmup:

  • For warmup below 20, every chain warns: “No variance estimation is performed for num_warmup < 20”.
  • warmup=1: 3996 divergent transitions after warmup; Stan suggests increasing adapt_delta above 0.8 and examining the pairs() plot.
  • warmup=5: 3386 divergent transitions after warmup, plus a warning that the estimated Bayesian Fraction of Missing Information was low for 1 chain.
  • warmup=10: 990 divergent transitions after warmup, 1 transition exceeding the maximum treedepth (Stan suggests increasing max_treedepth above 10), and low BFMI on 3 chains.
  • warmup=50: Stan warns there aren’t enough warmup iterations to fit the three stages of adaptation as currently configured, and reduces each stage to 15%/75%/10% of warmup (init_buffer = 7, adapt_window = 38, term_buffer = 5).
Chain 2: Iteration: 950 / 1000 [ 95%]  (Sampling)
Chain 1: Iteration: 950 / 1000 [ 95%]  (Sampling)
Chain 3: Iteration: 950 / 1000 [ 95%]  (Sampling)
Chain 2: Iteration: 1000 / 1000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 0.021067 seconds (Warm-up)
Chain 2:                0.242226 seconds (Sampling)
Chain 2:                0.263293 seconds (Total)
Chain 2: 
Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 0.047291 seconds (Warm-up)
Chain 1:                0.221 seconds (Sampling)
Chain 1:                0.268291 seconds (Total)
Chain 1: 
Chain 4: Iteration: 950 / 1000 [ 95%]  (Sampling)
Chain 3: Iteration: 1000 / 1000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 0.020908 seconds (Warm-up)
Chain 3:                0.257042 seconds (Sampling)
Chain 3:                0.27795 seconds (Total)
Chain 3: 
Chain 4: Iteration: 1000 / 1000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 0.057722 seconds (Warm-up)
Chain 4:                0.222342 seconds (Sampling)
Chain 4:                0.280064 seconds (Total)
Chain 4: 

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000757 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 7.57 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: WARNING: No variance estimation is
Chain 1:          performed for num_warmup < 20
Chain 1: 
Chain 1: Iteration: 1 / 1 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 2e-06 seconds (Warm-up)
Chain 1:                6.6e-05 seconds (Sampling)
Chain 1:                6.8e-05 seconds (Total)
Chain 1: 
Computing WAIC
Constructing posterior predictions
[ 380 / 3800 ]
[ 760 / 3800 ]
[ 1140 / 3800 ]
[ 1520 / 3800 ]
[ 1900 / 3800 ]
[ 2280 / 3800 ]
[ 2660 / 3800 ]
[ 3040 / 3800 ]
[ 3420 / 3800 ]
[ 3800 / 3800 ]
Warning messages:
1: There were 1 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup 
2: Examine the pairs() plot to diagnose sampling problems

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 1).

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 2).

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 4).

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 3).
Chain 2: 
Chain 2: Gradient evaluation took 0.000134 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 1.34 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: WARNING: There aren't enough warmup iterations to fit the
Chain 2:          three stages of adaptation as currently configured.
Chain 2:          Reducing each adaptation stage to 15%/75%/10% of
Chain 2:          the given number of warmup iterations:
Chain 2:            init_buffer = 15
Chain 2:            adapt_window = 75
Chain 2:            term_buffer = 10
Chain 2: Iteration:   1 / 1000 [  0%]  (Warmup)
Chain 1: Gradient evaluation took 0.000155 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.55 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: WARNING: There aren't enough warmup iterations to fit the
Chain 1:          three stages of adaptation as currently configured.
Chain 1:          Reducing each adaptation stage to 15%/75%/10% of
Chain 1:          the given number of warmup iterations:
Chain 1:            init_buffer = 15
Chain 1:            adapt_window = 75
Chain 1:            term_buffer = 10
Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
Chain 4: Gradient evaluation took 0.000112 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 1.12 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: WARNING: There aren't enough warmup iterations to fit the
Chain 4:          three stages of adaptation as currently configured.
Chain 4:          Reducing each adaptation stage to 15%/75%/10% of
Chain 4:          the given number of warmup iterations:
Chain 4:            init_buffer = 15
Chain 4:            adapt_window = 75
Chain 4:            term_buffer = 10
Chain 4: Iteration:   1 / 1000 [  0%]  (Warmup)
Chain 3: Gradient evaluation took 9.3e-05 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.93 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: WARNING: There aren't enough warmup iterations to fit the
Chain 3:          three stages of adaptation as currently configured.
Chain 3:          Reducing each adaptation stage to 15%/75%/10% of
Chain 3:          the given number of warmup iterations:
Chain 3:            init_buffer = 15
Chain 3:            adapt_window = 75
Chain 3:            term_buffer = 10
Chain 3: Iteration:   1 / 1000 [  0%]  (Warmup)
Chain 2: Iteration: 100 / 1000 [ 10%]  (Warmup)
Chain 4: Iteration: 100 / 1000 [ 10%]  (Warmup)
Chain 2: Iteration: 101 / 1000 [ 10%]  (Sampling)
Chain 4: Iteration: 101 / 1000 [ 10%]  (Sampling)
Chain 3: Iteration: 100 / 1000 [ 10%]  (Warmup)
Chain 3: Iteration: 101 / 1000 [ 10%]  (Sampling)
Chain 2: Iteration: 200 / 1000 [ 20%]  (Sampling)
Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
Chain 4: Iteration: 200 / 1000 [ 20%]  (Sampling)
Chain 1: Iteration: 101 / 1000 [ 10%]  (Sampling)
Chain 3: Iteration: 200 / 1000 [ 20%]  (Sampling)
Chain 4: Iteration: 300 / 1000 [ 30%]  (Sampling)
Chain 2: Iteration: 300 / 1000 [ 30%]  (Sampling)
Chain 1: Iteration: 200 / 1000 [ 20%]  (Sampling)
Chain 3: Iteration: 300 / 1000 [ 30%]  (Sampling)
Chain 2: Iteration: 400 / 1000 [ 40%]  (Sampling)
Chain 4: Iteration: 400 / 1000 [ 40%]  (Sampling)
Chain 1: Iteration: 300 / 1000 [ 30%]  (Sampling)
Chain 3: Iteration: 400 / 1000 [ 40%]  (Sampling)
Chain 2: Iteration: 500 / 1000 [ 50%]  (Sampling)
Chain 4: Iteration: 500 / 1000 [ 50%]  (Sampling)
Chain 1: Iteration: 400 / 1000 [ 40%]  (Sampling)
Chain 3: Iteration: 500 / 1000 [ 50%]  (Sampling)
Chain 2: Iteration: 600 / 1000 [ 60%]  (Sampling)
Chain 4: Iteration: 600 / 1000 [ 60%]  (Sampling)
Chain 1: Iteration: 500 / 1000 [ 50%]  (Sampling)
Chain 3: Iteration: 600 / 1000 [ 60%]  (Sampling)
Chain 4: Iteration: 700 / 1000 [ 70%]  (Sampling)
Chain 2: Iteration: 700 / 1000 [ 70%]  (Sampling)
Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
Chain 3: Iteration: 700 / 1000 [ 70%]  (Sampling)
Chain 4: Iteration: 800 / 1000 [ 80%]  (Sampling)
Chain 2: Iteration: 800 / 1000 [ 80%]  (Sampling)
Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
Chain 4: Iteration: 900 / 1000 [ 90%]  (Sampling)
Chain 3: Iteration: 800 / 1000 [ 80%]  (Sampling)
Chain 2: Iteration: 900 / 1000 [ 90%]  (Sampling)
Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
Chain 3: Iteration: 900 / 1000 [ 90%]  (Sampling)
Chain 4: Iteration: 1000 / 1000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 0.04071 seconds (Warm-up)
Chain 4:                0.218312 seconds (Sampling)
Chain 4:                0.259022 seconds (Total)
Chain 4: 
Chain 2: Iteration: 1000 / 1000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 0.042726 seconds (Warm-up)
Chain 2:                0.220972 seconds (Sampling)
Chain 2:                0.263698 seconds (Total)
Chain 2: 
Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
Chain 3: Iteration: 1000 / 1000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 0.052381 seconds (Warm-up)
Chain 3:                0.230176 seconds (Sampling)
Chain 3:                0.282557 seconds (Total)
Chain 3: 
Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 0.065918 seconds (Warm-up)
Chain 1:                0.23167 seconds (Sampling)
Chain 1:                0.297588 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000705 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 7.05 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: WARNING: No variance estimation is
Chain 1:          performed for num_warmup < 20
Chain 1: 
Chain 1: Iteration: 1 / 1 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 2e-06 seconds (Warm-up)
Chain 1:                7.3e-05 seconds (Sampling)
Chain 1:                7.5e-05 seconds (Total)
Chain 1: 
Computing WAIC
Constructing posterior predictions
[ 360 / 3600 ]
[ 720 / 3600 ]
[ 1080 / 3600 ]
[ 1440 / 3600 ]
[ 1800 / 3600 ]
[ 2160 / 3600 ]
[ 2520 / 3600 ]
[ 2880 / 3600 ]
[ 3240 / 3600 ]
[ 3600 / 3600 ]
Warning messages:
1: There were 1 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup 
2: Examine the pairs() plot to diagnose sampling problems

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 1).

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 3).

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 2).
Chain 1: 
Chain 1: Gradient evaluation took 0.000118 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.18 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 3: 
Chain 3: Gradient evaluation took 0.000108 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 1.08 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 2: 
Chain 2: Gradient evaluation took 0.000107 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 1.07 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 4).
Chain 3: Iteration:   1 / 1000 [  0%]  (Warmup)
Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
Chain 2: Iteration:   1 / 1000 [  0%]  (Warmup)
Chain 4: 
Chain 4: Gradient evaluation took 8.2e-05 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.82 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:   1 / 1000 [  0%]  (Warmup)
Chain 3: Iteration: 100 / 1000 [ 10%]  (Warmup)
Chain 2: Iteration: 100 / 1000 [ 10%]  (Warmup)
Chain 3: Iteration: 200 / 1000 [ 20%]  (Warmup)
Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
Chain 3: Iteration: 201 / 1000 [ 20%]  (Sampling)
Chain 2: Iteration: 200 / 1000 [ 20%]  (Warmup)
Chain 2: Iteration: 201 / 1000 [ 20%]  (Sampling)
Chain 4: Iteration: 100 / 1000 [ 10%]  (Warmup)
Chain 3: Iteration: 300 / 1000 [ 30%]  (Sampling)
Chain 2: Iteration: 300 / 1000 [ 30%]  (Sampling)
Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
Chain 1: Iteration: 201 / 1000 [ 20%]  (Sampling)
Chain 4: Iteration: 200 / 1000 [ 20%]  (Warmup)
Chain 4: Iteration: 201 / 1000 [ 20%]  (Sampling)
Chain 2: Iteration: 400 / 1000 [ 40%]  (Sampling)
Chain 3: Iteration: 400 / 1000 [ 40%]  (Sampling)
Chain 1: Iteration: 300 / 1000 [ 30%]  (Sampling)
Chain 4: Iteration: 300 / 1000 [ 30%]  (Sampling)
Chain 2: Iteration: 500 / 1000 [ 50%]  (Sampling)
Chain 3: Iteration: 500 / 1000 [ 50%]  (Sampling)
Chain 1: Iteration: 400 / 1000 [ 40%]  (Sampling)
Chain 4: Iteration: 400 / 1000 [ 40%]  (Sampling)
Chain 3: Iteration: 600 / 1000 [ 60%]  (Sampling)
Chain 2: Iteration: 600 / 1000 [ 60%]  (Sampling)
Chain 1: Iteration: 500 / 1000 [ 50%]  (Sampling)
Chain 4: Iteration: 500 / 1000 [ 50%]  (Sampling)
Chain 3: Iteration: 700 / 1000 [ 70%]  (Sampling)
Chain 2: Iteration: 700 / 1000 [ 70%]  (Sampling)
Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
Chain 4: Iteration: 600 / 1000 [ 60%]  (Sampling)
Chain 3: Iteration: 800 / 1000 [ 80%]  (Sampling)
Chain 2: Iteration: 800 / 1000 [ 80%]  (Sampling)
Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
Chain 4: Iteration: 700 / 1000 [ 70%]  (Sampling)
Chain 3: Iteration: 900 / 1000 [ 90%]  (Sampling)
Chain 2: Iteration: 900 / 1000 [ 90%]  (Sampling)
Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
Chain 4: Iteration: 800 / 1000 [ 80%]  (Sampling)
Chain 2: Iteration: 1000 / 1000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 0.060654 seconds (Warm-up)
Chain 2:                0.214667 seconds (Sampling)
Chain 2:                0.275321 seconds (Total)
Chain 2: 
Chain 3: Iteration: 1000 / 1000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 0.058267 seconds (Warm-up)
Chain 3:                0.220595 seconds (Sampling)
Chain 3:                0.278862 seconds (Total)
Chain 3: 
Chain 4: Iteration: 900 / 1000 [ 90%]  (Sampling)
Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 0.094021 seconds (Warm-up)
Chain 1:                0.22299 seconds (Sampling)
Chain 1:                0.317011 seconds (Total)
Chain 1: 
Chain 4: Iteration: 1000 / 1000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 0.09787 seconds (Warm-up)
Chain 4:                0.217986 seconds (Sampling)
Chain 4:                0.315856 seconds (Total)
Chain 4: 

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000798 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 7.98 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: WARNING: No variance estimation is
Chain 1:          performed for num_warmup < 20
Chain 1: 
Chain 1: Iteration: 1 / 1 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 2e-06 seconds (Warm-up)
Chain 1:                8.3e-05 seconds (Sampling)
Chain 1:                8.5e-05 seconds (Total)
Chain 1: 
Computing WAIC
Constructing posterior predictions
[ 320 / 3200 ]
[ 640 / 3200 ]
[ 960 / 3200 ]
[ 1280 / 3200 ]
[ 1600 / 3200 ]
[ 1920 / 3200 ]
[ 2240 / 3200 ]
[ 2560 / 3200 ]
[ 2880 / 3200 ]
[ 3200 / 3200 ]
Warning messages:
1: There were 1 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup 
2: Examine the pairs() plot to diagnose sampling problems

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 3).

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 1).

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 7.9e-05 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.79 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 3: 
Chain 3: Gradient evaluation took 9.1e-05 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.91 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 1: 
Chain 1: Gradient evaluation took 8.6e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.86 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 4).
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: 
Chain 4: Gradient evaluation took 6.8e-05 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.68 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 0.234609 seconds (Warm-up)
Chain 3:                0.254201 seconds (Sampling)
Chain 3:                0.48881 seconds (Total)
Chain 3: 
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 0.239729 seconds (Warm-up)
Chain 4:                0.250852 seconds (Sampling)
Chain 4:                0.490581 seconds (Total)
Chain 4: 
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 0.249077 seconds (Warm-up)
Chain 2:                0.285494 seconds (Sampling)
Chain 2:                0.534571 seconds (Total)
Chain 2: 
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 0.24049 seconds (Warm-up)
Chain 1:                0.306096 seconds (Sampling)
Chain 1:                0.546586 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'log_gdp ~ dnorm(mu, sigma)' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 4.3e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.43 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: WARNING: No variance estimation is
Chain 1:          performed for num_warmup < 20
Chain 1: 
Chain 1: Iteration: 1 / 1 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 2e-06 seconds (Warm-up)
Chain 1:                7e-05 seconds (Sampling)
Chain 1:                7.2e-05 seconds (Total)
Chain 1: 
Computing WAIC
Constructing posterior predictions
[ 400 / 4000 ]
[ 800 / 4000 ]
[ 1200 / 4000 ]
[ 1600 / 4000 ]
[ 2000 / 4000 ]
[ 2400 / 4000 ]
[ 2800 / 4000 ]
[ 3200 / 4000 ]
[ 3600 / 4000 ]
[ 4000 / 4000 ]
Warning messages:
1: In map2stan(m, chains = 4, cores = 4, warmup = 1000, iter = 1000) :
  'iter' less than or equal to 'warmup'. Setting 'iter' to sum of 'iter' and 'warmup' instead (2000).
2: There were 1 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup 
3: Examine the pairs() plot to diagnose sampling problems

       Mean StdDev lower 0.89 upper 0.89 n_eff         Rhat
a     -0.16  52.96     -90.51      39.87     2 3.439500e+14
bR     9.67   4.74       3.48      15.88     2 1.368092e+14
bA     1.43   7.41      -7.83      12.07     2 2.730022e+14
bAR   -7.01   4.64     -10.96       0.75     2 1.185151e+14
sigma 12.51  13.04       0.07      33.86   NaN 2.422987e+14
Warning message:
In precis(m.warmup1) :
  There were 3996 divergent iterations during sampling.
Check the chains (trace plots, n_eff, Rhat) carefully to ensure they are valid.
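The n_eff and Rhat columns above are the diagnostics to watch: with too little warmup the chains never mix, so n_eff collapses to 2 and Rhat explodes. As a rough sketch of what Rhat measures, here is a basic Gelman-Rubin version in plain Python (Stan actually reports a split-Rhat, so this is a simplification, not Stan's exact computation):

```python
import random
import statistics


def rhat(chains):
    """Basic Gelman-Rubin potential scale reduction factor.

    chains: list of equal-length lists of posterior draws for one parameter.
    """
    n = len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    w = statistics.fmean([statistics.variance(c) for c in chains])  # within-chain variance
    b = n * statistics.variance(means)                              # between-chain variance
    var_hat = (n - 1) / n * w + b / n  # pooled estimate of posterior variance
    return (var_hat / w) ** 0.5


random.seed(1)
# Well-mixed chains: all four sample the same distribution -> Rhat near 1
good = [[random.gauss(0, 1) for _ in range(1000)] for _ in range(4)]
# Stuck chains: each sits around a different mean -> Rhat far above 1
bad = [[random.gauss(mu, 1) for _ in range(1000)] for mu in (0, 5, 10, 15)]
print(round(rhat(good), 2), round(rhat(bad), 2))
```

When the chains disagree about where the posterior mass is, the between-chain variance dominates and Rhat moves well above 1, which is exactly the pattern in the precis output above.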

8H1

mp <- map2stan(
    alist(
        a ~ dnorm(0,1),
        b ~ dcauchy(0,1)
    ),
    data=list(y=1),
    start=list(a=0,b=0),
    iter=1e4, warmup=100 , WAIC=FALSE )

plot(mp)
#precis(mp)

The trace plot for b (the Cauchy prior) is much less stable than for a, with occasional extreme spikes. Its Rhat is also 1.01 and its effective number of samples is lower.
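The instability has a simple cause: the Cauchy has no mean or variance, so single draws far out in the tails keep appearing no matter how long the chain runs. A quick plain-Python illustration, drawing Cauchy variates via the inverse CDF tan(pi*(U - 1/2)):

```python
import math
import random

random.seed(42)
n = 10_000
normal_draws = [random.gauss(0, 1) for _ in range(n)]
# Standard Cauchy via inverse CDF: tan(pi * (U - 1/2)) for U ~ Uniform(0, 1)
cauchy_draws = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

print(max(abs(x) for x in normal_draws))   # a few standard deviations at most
print(max(abs(x) for x in cauchy_draws))   # typically orders of magnitude larger
```

Those rare huge draws are the spikes in the trace plot for b, and they are also why its effective sample size lags behind a's.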

8H2

options(mc.cores = parallel::detectCores())
data(WaffleDivorce)
d <- WaffleDivorce

d$MedianAgeMarriage_s <- (d$MedianAgeMarriage-mean(d$MedianAgeMarriage))/
    sd(d$MedianAgeMarriage)

d$Marriage_s <- (d$Marriage - mean(d$Marriage))/sd(d$Marriage)

d_trim <- d[, c("Divorce", "MedianAgeMarriage_s", "Marriage_s")]


m5.1 <- map2stan(
    alist(
        Divorce ~ dnorm( mu , sigma ) ,
        mu <- a + bA * MedianAgeMarriage_s ,
        a ~ dnorm( 10 , 10 ) ,
        bA ~ dnorm( 0 , 1 ) ,
        sigma ~ dunif( 0 , 10 )
) , data = d_trim, chains=4, cores=4 )

m5.2 <- map2stan(
    alist(
        Divorce ~ dnorm( mu , sigma ) ,
        mu <- a + bR * Marriage_s ,
        a ~ dnorm( 10 , 10 ) ,
        bR ~ dnorm(0, 1),
        sigma ~ dunif(0, 10)
    ), data =d_trim, chains=4, cores=4)

m5.3 <- map2stan(
    alist(
        Divorce ~ dnorm( mu , sigma ) ,
        mu <- a + bR*Marriage_s + bA*MedianAgeMarriage_s ,
        a ~ dnorm( 10 , 10 ) ,
        bR ~ dnorm( 0 , 1 ) ,
        bA ~ dnorm( 0 , 1 ) ,
        sigma ~ dunif( 0 , 10 )
),
    data = d_trim, chains=4, cores=4)
compare(m5.1, m5.2, m5.3)

m5.1 ranks best and carries most of the Akaike weight, followed by m5.2.
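compare() converts WAIC differences into Akaike-style weights, w_i = exp(-dWAIC_i/2) / sum_j exp(-dWAIC_j/2). A minimal sketch of that conversion in plain Python; the WAIC values here are made up for illustration, not the fitted ones:

```python
import math


def waic_weights(waics):
    """Akaike-style model weights from a list of WAIC values (lower is better)."""
    best = min(waics)
    rel = [math.exp(-0.5 * (w - best)) for w in waics]
    total = sum(rel)
    return [r / total for r in rel]


# Hypothetical WAICs for three candidate models (illustrative numbers only)
print([round(w, 2) for w in waic_weights([186.0, 201.0, 189.0])])
```

A model a few WAIC units behind the best still keeps some weight, while one 15 units back gets essentially none, which is the pattern compare() shows for these divorce models.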

rstan_options(auto_write =TRUE)

N <- 100
height <- rnorm(N,10,2)
leg_prop <- runif(N,0.4,0.5)
leg_left <- leg_prop*height +
    rnorm( N , 0 , 0.02 )
leg_right <- leg_prop*height +
    rnorm( N , 0 , 0.02 )
d <- data.frame(height,leg_left,leg_right)

m5.8s <- map2stan(
    alist(
        height ~ dnorm( mu , sigma ) ,
        mu <- a + bl*leg_left + br*leg_right ,
        a ~ dnorm( 10 , 100 ) ,
        bl ~ dnorm( 2 , 10 ) ,
        br ~ dnorm( 2 , 10 ) ,
        sigma ~ dcauchy( 0 , 1 )
),
data=d, chains=4, cores=4, start=list(a=10,bl=0,br=0,sigma=1) )
m5.8s2 <- map2stan(
    alist(
        height ~ dnorm( mu , sigma ) ,
        mu <- a + bl*leg_left + br*leg_right ,
        a ~ dnorm( 10 , 100 ) ,
        bl ~ dnorm( 2 , 10 ) ,
        br ~ dnorm( 2 , 10 ) & T[0,] ,
        sigma ~ dcauchy( 0 , 1 )
),
data=d, chains=4, cores=4, start=list(a=10,bl=0,br=0,sigma=1) )
pairs(m5.8s)
pairs(m5.8s2)

br is pushed to the right and bl to the left, because the truncated prior forces br to be positive while the data only constrain their sum.

8H4

compare(m5.8s, m5.8s2)
precis(m5.8s)
precis(m5.8s2)

m5.8s2 has fewer effective parameters (pWAIC), because the truncated prior restricts br's posterior. However, Rhat is too high for m5.8s2, so its chains need checking before trusting the comparison.
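The "effective parameters" being compared here is pWAIC: the sum, over observations, of the variance of the pointwise log-likelihood across posterior samples. A tighter posterior, like the one the truncated prior enforces, shrinks those variances. A toy sketch of the penalty term in plain Python, with a made-up log-likelihood matrix:

```python
import statistics


def p_waic(log_lik):
    """pWAIC penalty: sum over observations of the variance (across
    posterior samples) of the pointwise log-likelihood.

    log_lik: list of rows, one per posterior sample; each row holds
    one log-likelihood value per observation.
    """
    n_obs = len(log_lik[0])
    return sum(
        statistics.variance([row[i] for row in log_lik])
        for i in range(n_obs)
    )


# Wider posterior -> log-likelihoods vary more across samples -> larger penalty
wide = [[-1.0, -2.0], [-3.0, -4.0], [-1.5, -0.5]]
narrow = [[-1.0, -2.0], [-1.1, -2.1], [-0.9, -1.9]]
print(p_waic(wide), p_waic(narrow))
```

The narrow (more constrained) posterior gets a smaller penalty, which is the same mechanism behind m5.8s2's lower effective parameter count.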

8H5

num_weeks <- 1e5
positions <- rep(0,num_weeks)

islands <- data.frame(id=1:10, pop=sample(1:10, 10, replace=F))

current <- 10
for ( i in 1:num_weeks ) {
    # record current position
    positions[i] <- current
    # flip coin to generate proposal
    proposal <- current + sample( c(-1,1) , size=1 )
    # now make sure he loops around the archipelago
    if ( proposal < 1 ) proposal <- 10
    if ( proposal > 10 ) proposal <- 1
    # move?
    prob_move <- islands$pop[proposal]/islands$pop[current]
    current <- ifelse( runif(1) < prob_move , proposal , current )
}
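Over many weeks the visit frequencies should be proportional to islands$pop; that is the whole point of the Metropolis rule. A cross-check of the same simulation in plain Python (here the populations are fixed at 1..10 rather than shuffled with sample(), an assumption made for reproducibility):

```python
import random

random.seed(0)
pop = {i: i for i in range(1, 11)}   # population of island i (fixed, not shuffled)
num_weeks = 100_000
current = 10
visits = {i: 0 for i in range(1, 11)}

for _ in range(num_weeks):
    visits[current] += 1
    # flip a coin to propose a neighboring island
    proposal = current + random.choice([-1, 1])
    # loop around the archipelago
    if proposal < 1:
        proposal = 10
    if proposal > 10:
        proposal = 1
    # accept the move with probability equal to the population ratio
    if random.random() < pop[proposal] / pop[current]:
        current = proposal

freqs = {i: visits[i] / num_weeks for i in pop}
expected = {i: pop[i] / sum(pop.values()) for i in pop}
print(max(abs(freqs[i] - expected[i]) for i in pop))  # small deviation
```

The empirical frequencies land close to pop_i / sum(pop), so the chain's stationary distribution matches the target even though no normalizing constant was ever computed.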