examples/case_studies/bayesian_sem_workflow.ipynb (20 additions, 4 deletions)
@@ -27,6 +27,22 @@
 "\n",
 "A further goal is to strengthen the foundation for SEM modeling in PyMC. We demonstrate how to use different sampling strategies, both conditional and marginal formulations, to accommodate mean structures and hierarchical effects. These extensions showcase the flexibility and expressive power of Bayesian SEMs.\n",
 "\n",
+"#### Structure of the Presentation\n",
+"\n",
+"- Workflow: Bayesian and SEM\n",
+" - Job Satisfaction Data\n",
+" - Mathematical Specification\n",
+"- Modelling\n",
+" - CFA\n",
+" - SEM Conditional Formulation\n",
+" - SEM Marginal Formulation\n",
+" - SEM Mean Structure Formulation\n",
+"- Parameter Recovery Models\n",
+" - SEM Hierarchical Formulation\n",
+" - SEM + Discrete Choice\n",
+"- Conclusion: Statistical Modelling and Craft\n",
+"\n",
+"\n",
 "### The Bayesian Workflow\n",
 "Recall the stages of the Bayesian workflow.\n",
 "\n",
@@ -111,7 +127,7 @@
 "id": "sapphire-yellow",
 "metadata": {},
 "source": [
-"## Job Satisfaction and Bayesian Workflows\n",
+"### Job Satisfaction and Bayesian Workflows\n",
 "\n",
 "The data we will examine for this case study is drawn from an example discussed by {cite:p}`vehkalahti2019multivariate` around the drivers of Job satisfaction. In particular the focus is on how Constructive thought strategies can impact job satisfaction. We have 12 related measures. \n",
 "\n",
@@ -360,7 +376,7 @@
 "id": "3690f464",
 "metadata": {},
 "source": [
-"## Mathematical Interlude\n",
+"### Mathematical Specification\n",
 "\n",
 "Before we turn to implementation, let’s formalize the model mathematically.\n",
 "\n",
@@ -432,7 +448,7 @@
 "id": "78194165",
 "metadata": {},
 "source": [
-"## Setting up Utility Functions\n",
+"### Setting up Utility Functions\n",
 "\n",
 "For this exercise we will lean on a range of utility functions to build and compare the expansionary sequence. These functions include repeated steps that will be required for any SEM model. These functions modularize the model-building process and make it easier to compare successive model expansions.\n",
 "\n",
@@ -654,7 +670,7 @@
 "\n",
 "\n",
 "\n",
-"In the model below we sample draws from the latent factors `eta` and relate them to the observables by the matrix computation `pt.dot(eta, Lambda.T)`. This computation results in a \"psuedo-observation\" matrix which we then feed through our likelihood to calibrate the latent structures against the observed dats. This is the general pattern we'll see in all models below. The covariances (i.e. red arrows) among the latent factors is determined with `chol`."
+"In the model below we sample draws from the latent factors `eta` and relate them to the observables via the matrix computation `pt.dot(eta, Lambda.T)`. This computation results in a \"pseudo-observation\" matrix which we then feed through our likelihood to calibrate the latent structures against the observed data. The covariances (i.e. the red arrows) among the latent factors are determined with `chol`. These are the general patterns we'll see in all models below, but we add complexity as we go."
examples/case_studies/bayesian_sem_workflow.myst.md (20 additions, 4 deletions)
@@ -26,6 +26,22 @@ While both topics are well represented in the PyMC examples library, our goal he
 
 A further goal is to strengthen the foundation for SEM modeling in PyMC. We demonstrate how to use different sampling strategies, both conditional and marginal formulations, to accommodate mean structures and hierarchical effects. These extensions showcase the flexibility and expressive power of Bayesian SEMs.
 
+#### Structure of the Presentation
+
+- Workflow: Bayesian and SEM
+ - Job Satisfaction Data
+ - Mathematical Specification
+- Modelling
+ - CFA
+ - SEM Conditional Formulation
+ - SEM Marginal Formulation
+ - SEM Mean Structure Formulation
+- Parameter Recovery Models
+ - SEM Hierarchical Formulation
+ - SEM + Discrete Choice
+- Conclusion: Statistical Modelling and Craft
+
+
 ### The Bayesian Workflow
 Recall the stages of the Bayesian workflow.
 
@@ -87,7 +103,7 @@ az.style.use("arviz-darkgrid")
 rng = np.random.default_rng(42)
 ```
 
-## Job Satisfaction and Bayesian Workflows
+### Job Satisfaction and Bayesian Workflows
 
 The data we will examine for this case study is drawn from an example discussed by {cite:p}`vehkalahti2019multivariate` around the drivers of Job satisfaction. In particular the focus is on how Constructive thought strategies can impact job satisfaction. We have 12 related measures.
 
@@ -198,7 +214,7 @@ Interestingly, the Bayesian workflow embodies the same constructive strategies i
 
 +++
 
-## Mathematical Interlude
+### Mathematical Specification
 
 Before we turn to implementation, let’s formalize the model mathematically.
 
@@ -266,7 +282,7 @@ We'll introduce each of these components are additional steps as we layer over t
 
 +++
 
-## Setting up Utility Functions
+### Setting up Utility Functions
 
 For this exercise we will lean on a range of utility functions to build and compare the expansionary sequence. These functions include repeated steps that will be required for any SEM model. These functions modularize the model-building process and make it easier to compare successive model expansions.
 
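The utility functions themselves are not shown in this diff. As a rough, hypothetical sketch of the kind of helper the paragraph above describes, the function below fits each model in an expansionary sequence and compares them with LOO. The name `fit_and_compare`, the sampler settings, and the use of `az.compare` are illustrative assumptions, not the notebook's actual code.

```python
import arviz as az
import pymc as pm


def fit_and_compare(models, random_seed=42):
    """Sample each named model in the expansion sequence and rank them by LOO.

    `models` maps a label to an already-built pm.Model.
    """
    idatas = {}
    for name, model in models.items():
        with model:
            # log_likelihood is required for LOO-based comparison
            idatas[name] = pm.sample(
                idata_kwargs={"log_likelihood": True}, random_seed=random_seed
            )
    # az.compare returns a table ordered from best to worst expected log predictive density
    return az.compare(idatas, ic="loo")
```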
@@ -468,7 +484,7 @@ In this section, we translate the theoretical structure of a confirmatory factor
 
 
 
-In the model below we sample draws from the latent factors `eta` and relate them to the observables by the matrix computation `pt.dot(eta, Lambda.T)`. This computation results in a "psuedo-observation" matrix which we then feed through our likelihood to calibrate the latent structures against the observed dats. This is the general pattern we'll see in all models below. The covariances (i.e. red arrows) among the latent factors is determined with `chol`.
+In the model below we sample draws from the latent factors `eta` and relate them to the observables via the matrix computation `pt.dot(eta, Lambda.T)`. This computation results in a "pseudo-observation" matrix which we then feed through our likelihood to calibrate the latent structures against the observed data. The covariances (i.e. the red arrows) among the latent factors are determined with `chol`. These are the general patterns we'll see in all models below, but we add complexity as we go.
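To make that pattern concrete, here is a minimal PyMC sketch of the idea in the added sentence: latent draws `eta` with a Cholesky-factored covariance (`chol`), the pseudo-observation matrix `pt.dot(eta, Lambda.T)`, and a Normal likelihood. The dimensions, priors, variable names, and placeholder data are assumptions for illustration and are not taken from the notebook.

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

rng = np.random.default_rng(42)
observed = rng.normal(size=(100, 6))  # placeholder data: 100 respondents, 6 indicators
n_obs, n_indicators = observed.shape
n_factors = 2

with pm.Model() as cfa_sketch:
    # Factor loadings relating the latent factors to the observed indicators
    # (identification constraints on Lambda are omitted for brevity)
    Lambda = pm.Normal("Lambda", mu=0.0, sigma=1.0, shape=(n_indicators, n_factors))

    # Cholesky factor of the latent covariance, i.e. the "red arrows" among the factors
    chol, _, _ = pm.LKJCholeskyCov(
        "chol", n=n_factors, eta=2.0, sd_dist=pm.Exponential.dist(1.0)
    )

    # Draws of the latent factors eta, one row per observation
    eta = pm.MvNormal("eta", mu=pt.zeros(n_factors), chol=chol, shape=(n_obs, n_factors))

    # "Pseudo-observation" matrix fed through the likelihood
    mu = pt.dot(eta, Lambda.T)
    sigma = pm.Exponential("sigma", 1.0, shape=n_indicators)
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=observed)

    idata = pm.sample(random_seed=42)
```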