
Displays a summary of a fitted Bayesian Neural Network (BNN) model, including the function call and the Stan fit details.

Usage

# S3 method for class 'bnns'
print(x, ...)

Arguments

x

An object of class "bnns", typically the result of a call to bnns.default.

...

Additional arguments (currently not used).

Value

The function is called for its side effects and does not return a value. It prints the following:

  • The function call used to generate the "bnns" object.

  • A summary of the Stan fit object stored in x$fit.
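As an illustration, below is a minimal sketch of how an S3 print method of this kind could be structured. It is not the package's actual source: x$fit is documented above, while the x$call field and the function name print_bnns_sketch are assumptions made only for this example.

print_bnns_sketch <- function(x, ...) {
  cat("Call:\n")
  print(x$call)   # the recorded model-fitting call (assumed field name)
  cat("\nStan fit:\n")
  print(x$fit)    # delegates to rstan's print method for stanfit objects
  invisible(x)    # returned invisibly, a common convention for print methods
}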


Examples

# \donttest{
# Example usage:
data <- data.frame(x1 = runif(10), x2 = runif(10), y = rnorm(10))
model <- bnns(y ~ -1 + x1 + x2,
  data = data, L = 1, nodes = 2, act_fn = 2,
  iter = 1e1, warmup = 5, chains = 1
)
#> 
#> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
#> Chain 1: 
#> Chain 1: Gradient evaluation took 1.8e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.18 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1: 
#> Chain 1: 
#> Chain 1: WARNING: No variance estimation is
#> Chain 1:          performed for num_warmup < 20
#> Chain 1: 
#> Chain 1: Iteration: 1 / 10 [ 10%]  (Warmup)
#> Chain 1: Iteration: 2 / 10 [ 20%]  (Warmup)
#> Chain 1: Iteration: 3 / 10 [ 30%]  (Warmup)
#> Chain 1: Iteration: 4 / 10 [ 40%]  (Warmup)
#> Chain 1: Iteration: 5 / 10 [ 50%]  (Warmup)
#> Chain 1: Iteration: 6 / 10 [ 60%]  (Sampling)
#> Chain 1: Iteration: 7 / 10 [ 70%]  (Sampling)
#> Chain 1: Iteration: 8 / 10 [ 80%]  (Sampling)
#> Chain 1: Iteration: 9 / 10 [ 90%]  (Sampling)
#> Chain 1: Iteration: 10 / 10 [100%]  (Sampling)
#> Chain 1: 
#> Chain 1:  Elapsed Time: 0 seconds (Warm-up)
#> Chain 1:                0 seconds (Sampling)
#> Chain 1:                0 seconds (Total)
#> Chain 1: 
print(model)
#> Call:
#> bnns.default(formula = y ~ -1 + x1 + x2, data = data, L = 1, 
#>     nodes = 2, act_fn = 2, iter = 10, warmup = 5, chains = 1)
#> 
#> Stan fit:
#> Inference for Stan model: anon_model.
#> 1 chains, each with iter=10; warmup=5; thin=1; 
#> post-warmup draws per chain=5, total post-warmup draws=5.
#> 
#>            mean se_mean   sd   2.5%    25%    50%   75% 97.5% n_eff Rhat
#> w1[1,1]   -0.22    0.55 1.04  -1.41  -0.96  -0.26  0.51  1.01     3 0.85
#> w1[1,2]   -0.42    0.54 1.01  -1.63  -1.08  -0.48  0.44  0.69     3 0.71
#> w1[2,1]    0.35    0.10 0.20   0.17   0.21   0.24  0.53  0.58     3 1.20
#> w1[2,2]   -0.31    0.84 1.57  -1.69  -1.50  -0.88  0.48  1.89     3 2.17
#> b1[1]      0.44    0.90 1.69  -1.83  -0.70   1.27  1.62  1.92     3 3.01
#> b1[2]      0.24    0.60 1.12  -1.21  -0.14   0.22  0.77  1.61     3 1.50
#> w_out[1]  -0.13    0.24 0.45  -0.79  -0.17  -0.02  0.04  0.31     3 1.22
#> w_out[2]   0.38    0.17 0.32   0.18   0.22   0.22  0.36  0.88     3 1.32
#> b_out      0.53    0.10 0.19   0.32   0.42   0.44  0.70  0.74     3 0.75
#> sigma      1.24    0.10 0.18   0.98   1.16   1.30  1.36  1.41     3 0.74
#> lp__     -10.33    1.20 2.25 -13.08 -11.62 -10.14 -9.35 -7.51     3 0.72
#> 
#> Samples were drawn using NUTS(diag_e) at Tue Jan 14 03:41:18 2025.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at 
#> convergence, Rhat=1).
# }