Commit c93d778

add structure to the introduction
Signed-off-by: Nathaniel <NathanielF@users.noreply.github.com>
1 parent a5643fe

File tree

2 files changed: +40 −8 lines


examples/case_studies/bayesian_sem_workflow.ipynb

Lines changed: 20 additions & 4 deletions
@@ -27,6 +27,22 @@
 "\n",
 "A further goal is to strengthen the foundation for SEM modeling in PyMC. We demonstrate how to use different sampling strategies, both conditional and marginal formulations, to accommodate mean structures and hierarchical effects. These extensions showcase the flexibility and expressive power of Bayesian SEMs.\n",
 "\n",
+"#### Structure of the Presentation\n",
+"\n",
+"- Workflow: Bayesian and SEM\n",
+" - Job Satisfaction Data\n",
+" - Mathematical Specification\n",
+"- Modelling\n",
+" - CFA\n",
+" - SEM Conditional Formulation\n",
+" - SEM Marginal Formulation\n",
+" - SEM Mean Structure Formulation\n",
+"- Parameter Recovery Models\n",
+" - SEM Hierarchical Formulation\n",
+" - SEM + Discrete Choice\n",
+"- Conclusion: Statistical Modelling and Craft\n",
+"\n",
+"\n",
 "### The Bayesian Workflow\n",
 "Recall the stages of the Bayesian workflow.\n",
 "\n",
@@ -111,7 +127,7 @@
 "id": "sapphire-yellow",
 "metadata": {},
 "source": [
-"## Job Satisfaction and Bayesian Workflows\n",
+"### Job Satisfaction and Bayesian Workflows\n",
 "\n",
 "The data we will examine for this case study is drawn from an example discussed by {cite:p}`vehkalahti2019multivariate` around the drivers of Job satisfaction. In particular the focus is on how Constructive thought strategies can impact job satisfaction. We have 12 related measures. \n",
 "\n",
@@ -360,7 +376,7 @@
 "id": "3690f464",
 "metadata": {},
 "source": [
-"## Mathematical Interlude\n",
+"### Mathematical Specification\n",
 "\n",
 "Before we turn to implementation, let’s formalize the model mathematically.\n",
 "\n",
@@ -432,7 +448,7 @@
 "id": "78194165",
 "metadata": {},
 "source": [
-"## Setting up Utility Functions\n",
+"### Setting up Utility Functions\n",
 "\n",
 "For this exercise we will lean on a range of utility functions to build and compare the expansionary sequence. These functions include repeated steps that will be required for any SEM model. These functions modularize the model-building process and make it easier to compare successive model expansions.\n",
 "\n",
@@ -654,7 +670,7 @@
 "\n",
 "![](cfa_excalidraw.png)\n",
 "\n",
-"In the model below we sample draws from the latent factors `eta` and relate them to the observables by the matrix computation `pt.dot(eta, Lambda.T)`. This computation results in a \"psuedo-observation\" matrix which we then feed through our likelihood to calibrate the latent structures against the observed dats. This is the general pattern we'll see in all models below. The covariances (i.e. red arrows) among the latent factors is determined with `chol`."
+"In the model below we sample draws from the latent factors `eta` and relate them to the observables by the matrix computation `pt.dot(eta, Lambda.T)`. This computation results in a \"pseudo-observation\" matrix which we then feed through our likelihood to calibrate the latent structures against the observed data. The covariances (i.e. the red arrows) among the latent factors are determined with `chol`. These are the general patterns we'll see in all models below, but we add complexity as we go."
 ]
},
{

examples/case_studies/bayesian_sem_workflow.myst.md

Lines changed: 20 additions & 4 deletions
@@ -26,6 +26,22 @@ While both topics are well represented in the PyMC examples library, our goal he

 A further goal is to strengthen the foundation for SEM modeling in PyMC. We demonstrate how to use different sampling strategies, both conditional and marginal formulations, to accommodate mean structures and hierarchical effects. These extensions showcase the flexibility and expressive power of Bayesian SEMs.

+#### Structure of the Presentation
+
+- Workflow: Bayesian and SEM
+  - Job Satisfaction Data
+  - Mathematical Specification
+- Modelling
+  - CFA
+  - SEM Conditional Formulation
+  - SEM Marginal Formulation
+  - SEM Mean Structure Formulation
+- Parameter Recovery Models
+  - SEM Hierarchical Formulation
+  - SEM + Discrete Choice
+- Conclusion: Statistical Modelling and Craft
+
+
 ### The Bayesian Workflow
 Recall the stages of the Bayesian workflow.

@@ -87,7 +103,7 @@ az.style.use("arviz-darkgrid")
 rng = np.random.default_rng(42)
 ```

-## Job Satisfaction and Bayesian Workflows
+### Job Satisfaction and Bayesian Workflows

 The data we will examine for this case study is drawn from an example discussed by {cite:p}`vehkalahti2019multivariate` around the drivers of Job satisfaction. In particular the focus is on how Constructive thought strategies can impact job satisfaction. We have 12 related measures.

@@ -198,7 +214,7 @@ Interestingly, the Bayesian workflow embodies the same constructive strategies i

 +++

-## Mathematical Interlude
+### Mathematical Specification

 Before we turn to implementation, let’s formalize the model mathematically.

@@ -266,7 +282,7 @@ We'll introduce each of these components are additional steps as we layer over t

 +++

-## Setting up Utility Functions
+### Setting up Utility Functions

 For this exercise we will lean on a range of utility functions to build and compare the expansionary sequence. These functions include repeated steps that will be required for any SEM model. These functions modularize the model-building process and make it easier to compare successive model expansions.

@@ -468,7 +484,7 @@ In this section, we translate the theoretical structure of a confirmatory factor

 ![](cfa_excalidraw.png)

-In the model below we sample draws from the latent factors `eta` and relate them to the observables by the matrix computation `pt.dot(eta, Lambda.T)`. This computation results in a "psuedo-observation" matrix which we then feed through our likelihood to calibrate the latent structures against the observed dats. This is the general pattern we'll see in all models below. The covariances (i.e. red arrows) among the latent factors is determined with `chol`.
+In the model below we sample draws from the latent factors `eta` and relate them to the observables by the matrix computation `pt.dot(eta, Lambda.T)`. This computation results in a "pseudo-observation" matrix which we then feed through our likelihood to calibrate the latent structures against the observed data. The covariances (i.e. the red arrows) among the latent factors are determined with `chol`. These are the general patterns we'll see in all models below, but we add complexity as we go.

 ```{code-cell} ipython3
 with pm.Model(coords=coords) as cfa_model_v1:
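The "pseudo-observation" pattern described in the changed paragraph is easiest to see in terms of array shapes. Below is a minimal NumPy sketch of the computation, not the notebook's actual model: the dimensions and the example factor covariance matrix are hypothetical, and the fixed Cholesky factor stands in for what would be a random draw (e.g. from `LKJCholeskyCov`) in the PyMC model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 100 respondents, 3 latent factors, 12 observed measures
n_obs, n_factors, n_indicators = 100, 3, 12

# Covariances among the latent factors enter via a Cholesky factor ("chol");
# here it is fixed rather than sampled, purely to illustrate the shapes
factor_cov = np.array([[1.0, 0.3, 0.2],
                       [0.3, 1.0, 0.4],
                       [0.2, 0.4, 1.0]])
chol = np.linalg.cholesky(factor_cov)

# Draws from the correlated latent factors ("eta")
eta = rng.standard_normal((n_obs, n_factors)) @ chol.T

# Factor-loading matrix ("Lambda"): one row per indicator, one column per factor
Lambda = rng.standard_normal((n_indicators, n_factors))

# The pseudo-observation matrix, i.e. pt.dot(eta, Lambda.T) in PyTensor
pseudo_obs = eta @ Lambda.T
print(pseudo_obs.shape)  # (100, 12): one pseudo-observation per respondent per indicator
```

In the actual models, `eta` and `Lambda` are random variables and `pseudo_obs` parameterizes the likelihood of the observed data; the shape logic is the same.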
