Provide task-oriented recipes for common DSAMbayes operational workflows. Each guide starts from a user objective, gives minimal reproducible steps, and includes expected output artefacts and quick verification checks.
Audience
Users who know the concepts but need execution steps.
Metadata artefacts are written only if you supply --run-dir or set outputs.run_dir.
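For example, a validation call that also writes metadata artefacts might look like the following sketch; it assumes the validate subcommand (referenced later in this guide) accepts the same --config and --run-dir flags as run, so check your version's help output:

```sh
# Validate the config and write metadata artefacts (assumed flags; --run-dir is optional)
Rscript scripts/dsambayes.R validate --config config/cre_geo_panel.yaml \
  --run-dir results/validate_check
```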
If validation fails:
Check the error message for missing data paths, invalid YAML keys, or formula errors.
Remember that the authored v2 schema does not expose model.formula; the runner compiles it from target, media, controls, and optional hierarchy / effects (you can inspect the result with the sketch after this list).
Fix the config and re-run validate before proceeding.
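To see what the runner actually compiled, you can search the internal config written by an earlier run; this sketch assumes the compiled formula appears under a key containing "formula" in config.compiled.yaml, so inspect the file directly if the key differs:

```sh
# Inspect the compiled internal config from an earlier run (path is a placeholder)
grep -n 'formula' results/<run_dir>/00_run_metadata/config.compiled.yaml
```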
3. Run the model
Rscript scripts/dsambayes.R run --config config/cre_geo_panel.yaml
Expected outcome:
Exit code 0.
Full staged artefact tree under the run directory.
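To confirm the exit code from the shell:

```sh
# Immediately after the run command above, $? holds its exit code
echo "exit code: $?"   # expect 0
```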
4. Locate the run directory
The runner prints the run directory path during execution. It follows the pattern:
results/YYYYMMDD_HHMMSS_<run_label>/
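Because each run gets its own timestamped directory, the most recent one can be found with:

```sh
# List run directories newest first; the top line is the latest run
ls -dt results/*/ | head -n 1
```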
5. Verify artefacts
Check that the stage folders are populated. The guides in this section reference 00_run_metadata/, 20_model_fit/, 40_diagnostics/, and 50_model_selection/; a quick check is sketched below.
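A shell check over the stage folders referenced in this section (your run may produce additional stages; replace <run_dir> with your run directory):

```sh
# Flag any stage folder that is missing or empty
for d in 00_run_metadata 20_model_fit 40_diagnostics 50_model_selection; do
  if [ -n "$(ls -A "results/<run_dir>/$d" 2>/dev/null)" ]; then
    echo "$d: populated"
  else
    echo "$d: MISSING OR EMPTY"
  fi
done
```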
Read and act on the diagnostics report produced by a DSAMbayes runner execution, understanding which checks matter most and what remediation steps to take.
Prerequisites
A completed runner execution with artefacts under 40_diagnostics/.
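For a first scan, the report can be rendered as aligned columns; this assumes the diagnostics_report.csv file name used later in this section:

```sh
# View the diagnostics report as aligned columns (path is a placeholder)
column -s, -t < results/<run_dir>/40_diagnostics/diagnostics_report.csv | head -n 20
```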
Compare multiple DSAMbayes runner executions and select a candidate model for reporting or decision-making, using predictive scoring and diagnostic summaries.
Prerequisites
Two or more completed runner executions (MCMC fit method).
Artefacts under 50_model_selection/ for each run (LOO summary, ELPD outputs).
3. Check Pareto-k reliability
Observations with Pareto-k > 0.7 indicate unreliable LOO estimates.
If many observations have high Pareto-k values, the LOO approximation is unreliable for that run. Consider time-series cross-validation as an alternative.
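A rough way to count high-k observations from the exported LOO artefacts; this is a hypothetical sketch, since both the pointwise file name (loo_pointwise.csv) and the column position of the Pareto-k values are assumptions to adapt to your artefacts:

```sh
# Count observations with Pareto-k > 0.7 (file name and column 2 are assumed)
awk -F, 'NR > 1 && $2 > 0.7 { n++ } END { print n+0, "observations with k > 0.7" }' \
  results/<run_dir>/50_model_selection/loo_pointwise.csv
```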
4. Review time-series CV (if available)
If diagnostics.time_series_selection.enabled: true was configured, check the time-series CV outputs for each run.
This provides expanding-window blocked CV scores (holdout ELPD, RMSE, SMAPE) that are more appropriate for time-series data than standard LOO.
5. Cross-reference diagnostics
For each candidate run, check the diagnostics overall status:
head -1 results/<run_dir>/40_diagnostics/diagnostics_report.csv
A model with better ELPD but failing diagnostics should not be preferred over a model with slightly lower ELPD and passing diagnostics.
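To scan the overall status across every candidate at once, loop the same check over the run directories:

```sh
# Print the first line of each run's diagnostics report, prefixed by the run directory
for d in results/*/; do
  printf '%s: %s\n' "$d" "$(head -1 "${d}40_diagnostics/diagnostics_report.csv")"
done
```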
6. Compare fit quality visually
Review the fit time series and scatter plots in 20_model_fit/ for each run:
Fit time series — does the model track the observed KPI?
Fit scatter — is the predicted-vs-observed relationship close to the diagonal?
Posterior forest — are coefficient estimates reasonable and well-identified?
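To list a run's plots before opening them (this assumes the plots are written as PNG files, as the troubleshooting notes below imply):

```sh
ls results/<run_dir>/20_model_fit/*.png
```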
7. Selection decision matrix
| Criterion | Weight | Run A | Run B |
| --- | --- | --- | --- |
| ELPD (higher is better) | High | value | value |
| Pareto-k reliability (fewer high-k) | High | value | value |
| Diagnostics overall status | High | pass/warn/fail | pass/warn/fail |
| TSCV holdout RMSE (if available) | Medium | value | value |
| Coefficient plausibility | Medium | judgement | judgement |
| Fit visual quality | Low | judgement | judgement |
8. Record the selection
Document the selected run directory and rationale. If using the runner for release evidence, the selected run’s artefacts form part of the evidence pack.
Caveats
ELPD is not causal validation. Predictive scoring measures how well the model predicts the KPI, not whether it identifies causal media effects correctly.
Pooled models do not support time-series CV (rejected by config validation).
MAP-fitted models do not produce LOO diagnostics. Use MCMC for model comparison.
Symptoms: CSV artefacts are present but PNG plot files are missing.
| Error pattern | Cause | Fix |
| --- | --- | --- |
| “cannot open connection” for PNG | Graphics device issue | Check that grDevices is available; ensure sufficient disk space |
| Plot function error for hierarchical model | Group-level coefficient draws are vectors, not scalars | Fixed in recent releases; ensure you are running the latest version |
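A quick way to reproduce or rule out the symptom is to count PNG artefacts under the run directory:

```sh
# Zero here, with CSV artefacts present, matches the symptom above
find results/<run_dir> -name '*.png' | wc -l
```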
General debugging steps
Read the full error message. DSAMbayes uses cli::cli_abort() with descriptive messages that identify the failing function and parameter.
Check the resolved and compiled configs. If a run directory was created, inspect 00_run_metadata/config.resolved.yaml to see what defaults were applied and 00_run_metadata/config.compiled.yaml to see the internal runner config that was actually passed downstream.
Check session info. Inspect 00_run_metadata/session_info.txt for package version mismatches.
Clear the Stan cache. Stale compiled models can cause unexpected failures:
rm -rf .cache/dsambayes/
Run validate before run. Always validate first to catch config errors before committing to a full MCMC run (see the sketch after this list).
Reduce iterations for debugging. Use a small fit.mcmc block or switch temporarily to fit.method: optimise to iterate quickly on schema and data issues.
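A minimal way to enforce the validate-before-run habit is to chain the two subcommands so the run only starts if validation succeeds; as above, this assumes validate takes the same --config flag as run:

```sh
# The run only proceeds if validate exits 0
Rscript scripts/dsambayes.R validate --config config/cre_geo_panel.yaml \
  && Rscript scripts/dsambayes.R run --config config/cre_geo_panel.yaml
```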