Add Tutorial 19: dCDH for Marketing Pulse Campaigns#373
Merged
Conversation

AI review - Overall Assessment: ✅ Looks good
(Review sections: Executive Summary, Methodology, Code Quality, Performance, Maintainability, Tech Debt, Security, Documentation/Tests)

Owner (Author): /ai-review

🔁 AI review rerun (requested by @igerber)
(Review sections: Executive Summary, Methodology, Code Quality, Performance, Maintainability, Tech Debt, Security, Documentation/Tests, Path to Approval)
End-to-end practitioner walkthrough on a 60-market reversible-treatment panel covering: the AER 2020 Theorem 1 TWFE decomposition diagnostic via twowayfeweights, DCDH Phase 1 (DID_M, joiners-vs-leavers, single-lag placebo, TWFE diagnostic block), the L_max multi-horizon event study with multiplier bootstrap, a stakeholder-communication template with explicit per-bullet source mapping, and drift guards. The tutorial leans into dCDH's distinguishing feature - it works on panels with no never-treated and no always-treated units (only switchers), because identification rests on contemporaneously stable cells rather than a permanent never-treated comparison group.

Doc edits beyond the notebook:
- README backfills the missing Tutorial 17 (Brand Awareness Survey) entry alongside the new Tutorial 19 entry
- docs/doc-deps.yaml wires the notebook into the dCDH dependency list so /docs-impact flags it on future estimator changes
- docs/practitioner_decision_tree.rst adds a tip cross-link in the Reversible Treatment section (mirrors the T17/T18 cross-link form)
- CHANGELOG [Unreleased] entry under Added

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
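For orientation, a minimal sketch of the notebook's core steps. The function names (DCDH, twowayfeweights, generate_reversible_did_data) and the parameter values come from this PR's summary; the exact call signatures and result attribute names are assumptions for illustration, not verbatim from the notebook.

```python
# Sketch only: signatures and attribute names are assumed, not copied
# from the tutorial notebook.
from diff_diff import DCDH, generate_reversible_did_data, twowayfeweights

# Synthetic 60-market, 8-period single-switch panel with heterogeneous effects
# (parameter values as listed in the Validation section of this PR).
panel = generate_reversible_did_data(
    n_groups=60,
    n_periods=8,
    pattern="single_switch",
    heterogeneous_effects=True,
    effect_sd=4.0,
    seed=53,
)

# AER 2020 Theorem 1 diagnostic: how much TWFE weight turns negative
# on this reversible-treatment panel?
twfe_diag = twowayfeweights(panel)

# Phase 1 DID_M estimate plus the L_max = 2 multi-horizon event study,
# with inference from a 199-draw multiplier bootstrap.
model = DCDH(L_max=2, n_bootstrap=199)
results = model.fit(panel)
print(results.overall_att)
```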
…ilter
- Fix placebo horizon range from `l = -1..-L+1` to `l = -L..-1`
(matches the implementation: at L_max=2 horizons are l = -2 and l = -1)
- Scope "the only Python estimator" claims to "diff-diff's only estimator"
in Section 1 abstract and the stakeholder template - REGISTRY.md asserts
uniqueness in the library, not across all of Python
- Rewrite the Section 4 cross-estimator paragraph to enumerate the actual
staggered estimators in diff-diff (CallawaySantAnna, SunAbraham,
WooldridgeETWFE, ImputationDiD, TwoStageDiD, EfficientDiD) and frame
the comparison around the absorbing-treatment restriction rather than
"needs a never-treated cohort that survives to the end of the panel"
- Narrow the L_max=2 fit warning filter from `simplefilter("ignore",
  UserWarning)` to `filterwarnings("ignore", message=r"Assumption 7 .*",
  category=UserWarning)` so only the expected leavers-present warning
  is silenced; any new / unexpected UserWarning will surface and keep
  the notebook usable as a drift detector (see the sketch after this
  commit message)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
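For reference, the narrowed filter amounts to the following. The message prefix "Assumption 7" is quoted from the bullet above; everything else is a minimal sketch of how such a filter is typically set up.

```python
import warnings

# Narrowed filter: silence only the expected leavers-present warning whose
# message starts with "Assumption 7". Any other UserWarning raised during the
# L_max=2 fit still surfaces, so the notebook keeps working as a drift detector.
warnings.filterwarnings(
    "ignore",
    message=r"Assumption 7 .*",
    category=UserWarning,
)

# Previous, too-broad version hid every UserWarning:
# warnings.simplefilter("ignore", UserWarning)
```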
The Section 4 paragraph previously inferred Assumption 11 satisfaction from whole-panel stable-cell totals (154 stable_0, 206 stable_1). That is the wrong check - A11 is per-period: at every switching period, at least one stable cell of the relevant type must exist. Rewrite the closing sentences of the "Where do the controls come from?" paragraph to (a) make the per-period nature of the check explicit, (b) reference the library's fit-time A11 warning machinery as the correct verification mechanism, (c) note that our fit ran without any A11 warning, and (d) explain why single-switch panels naturally tend to satisfy A11 (adjacent cohorts function as stable controls for each other).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
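As a rough illustration of what the per-period check means, here is a minimal pandas sketch. The column names (group, period, treated) and the informal definition of a "stable cell" (a group whose treatment status is unchanged from the previous period) are assumptions for illustration; the authoritative check is the library's fit-time A11 warning machinery, not this snippet.

```python
import pandas as pd

def check_a11_per_period(panel: pd.DataFrame) -> pd.DataFrame:
    """Informal per-period A11 tally: for each period t >= 1, count joiners,
    leavers, and contemporaneously stable cells. A period with joiners needs
    at least one stable-at-0 cell; a period with leavers needs at least one
    stable-at-1 cell."""
    df = panel.sort_values(["group", "period"]).copy()
    df["prev"] = df.groupby("group")["treated"].shift(1)
    df = df.dropna(subset=["prev"])

    rows = []
    for t, sub in df.groupby("period"):
        joiners = ((sub["prev"] == 0) & (sub["treated"] == 1)).sum()
        leavers = ((sub["prev"] == 1) & (sub["treated"] == 0)).sum()
        stable_0 = ((sub["prev"] == 0) & (sub["treated"] == 0)).sum()
        stable_1 = ((sub["prev"] == 1) & (sub["treated"] == 1)).sum()
        rows.append({
            "period": t,
            "joiners": joiners, "leavers": leavers,
            "stable_0": stable_0, "stable_1": stable_1,
            "ok": (joiners == 0 or stable_0 > 0)
                  and (leavers == 0 or stable_1 > 0),
        })
    return pd.DataFrame(rows)
```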
Owner (Author): /ai-review

🔁 AI review rerun (requested by @igerber) - Overall Assessment: ✅ Looks good
(Review sections: Executive Summary, Methodology, Code Quality, Performance, Maintainability, Tech Debt, Security, Documentation/Tests)
Bullet 1 of the closing summary previously asserted "No other modern Python DiD estimator handles this case" - broader than the library's own documented contract (REGISTRY only asserts uniqueness in diff-diff) and a claim that can go stale independently of the library. Drop the comparison sentence entirely. The reader has the within-library positioning from Section 1 and Section 4; the closing checklist doesn't need an ecosystem-wide claim to make its point.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Owner (Author): /ai-review

🔁 AI review rerun (requested by @igerber) - Overall Assessment: ✅ Looks good
(Review sections: Executive Summary, Methodology, Code Quality, Performance, Maintainability, Tech Debt, Security, Documentation/Tests)
Summary
- docs/tutorials/19_dcdh_marketing_pulse.ipynb (35 cells, 12 code / 23 markdown, ~3s nbmake) - practitioner walkthrough of ChaisemartinDHaultfoeuille (alias DCDH) on a 60-market reversible-treatment panel, covering the TWFE decomposition diagnostic (twowayfeweights) and the multi-horizon event study with multiplier bootstrap (L_max, n_bootstrap)
- docs/tutorials/README.md backfills the missing Tutorial 17 entry alongside the new Tutorial 19 entry
- docs/doc-deps.yaml adds the notebook to the chaisemartin_dhaultfoeuille.py dependency list; docs/practitioner_decision_tree.rst adds a .. tip:: cross-link in the Reversible Treatment section
- CHANGELOG [Unreleased] entry

Methodology references (required if estimator / math changes)
- DID_M (AER 2020) and DID_l (NBER WP 29873 dynamic companion)
- No files in diff_diff/, rust/src/, or docs/methodology/ modified. The tutorial uses the existing public API (DCDH, twowayfeweights, generate_reversible_did_data) and surfaces the documented Phase 1 + L_max + bootstrap contracts already locked in docs/methodology/REGISTRY.md § ChaisemartinDHaultfoeuille.

Validation
- pytest --nbmake docs/tutorials/19_dcdh_marketing_pulse.ipynb passes in ~3s. The notebook ships with cleared outputs; nbmake re-executes end-to-end and the final drift-guard cell prints "All drift guards passed."
- Drift guards (t19-cell-031) lock the narrative numbers: overall_att ∈ [10.72, 11.72] and CI covers truth, |placebo_effect| < 1.5, twfe_fraction_negative >= 0.10, event_study_effects[1]["effect"] ∈ [10.24, 12.24]. Tolerances chosen wide enough to absorb numerical noise, tight enough to detect material drift.
- Data generation and estimator settings: seed=53, n_groups=60, n_periods=8, pattern="single_switch", heterogeneous_effects=True, effect_sd=4.0, L_max=2, n_bootstrap=199 (matches the ci_params.bootstrap convention). Selection criteria: TWFE fraction_negative >= 0.10 and bias visible, dCDH ATT covers truth, both joiners and leavers contribute, multi-horizon event-study CIs cover truth and placebos cover 0.

Security / privacy
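A minimal sketch of what such a drift-guard cell can look like. The tolerances are the ones quoted above; the argument names are illustrative, and how the notebook actually pulls these quantities off its result objects is not shown here.

```python
def run_drift_guards(overall_att, ci, true_att, placebo_effect,
                     twfe_fraction_negative, dynamic_effect_1):
    """Assert the headline numbers stay inside the PR-summary tolerances;
    any material drift raises AssertionError and fails nbmake."""
    assert 10.72 <= overall_att <= 11.72            # overall ATT band
    assert ci[0] <= true_att <= ci[1]               # CI covers the known truth
    assert abs(placebo_effect) < 1.5                # single-lag placebo near zero
    assert twfe_fraction_negative >= 0.10           # TWFE pathology still visible
    assert 10.24 <= dynamic_effect_1 <= 12.24       # event_study_effects[1]["effect"]
    print("All drift guards passed.")
```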
🤖 Generated with Claude Code