Calibration Challenge — submit 10 posts (5 winners, 5 flops) and we'll score them blind #4
CaptainFredric started this conversation in Show and tell
ContentForge is built on a premise: deterministic heuristics can predict content performance better than gut feel. Before we claim that publicly on Product Hunt, we want to prove it.
Here's the challenge:
Submit 10 historical posts — 5 that performed well, 5 that flopped — without telling us which is which. We'll run them through the scoring engine and rank them blind. Then you tell us whether the ranking matches reality.
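For anyone curious how we'll check agreement on our end, here's a minimal sketch of the blind-ranking step. Note that score_post here is a placeholder heuristic, not the real ContentForge engine — the point is the evaluation shape, not the scoring logic:

```python
def score_post(text: str) -> float:
    # Placeholder heuristic standing in for the real scoring engine:
    # reward a question hook and penalize very long posts.
    score = 0.0
    if "?" in text:
        score += 1.0
    score += max(0.0, 1.0 - len(text) / 500)
    return score

def blind_rank(posts: dict[str, str]) -> list[str]:
    """Rank post IDs best-first by engine score; labels are never seen."""
    return sorted(posts, key=lambda pid: score_post(posts[pid]), reverse=True)

def top_half_hits(ranking: list[str], winners: set[str]) -> int:
    """How many actual winners the engine placed in its top half."""
    top = ranking[: len(ranking) // 2]
    return sum(1 for pid in top if pid in winners)
```

With 10 posts, 5 winners in the top half is perfect calibration and ~2.5 is chance — that gap is what the challenge measures.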
How to participate
Copy this template into a comment:
DMs welcome if you'd rather keep your content private — results reported without attribution.
What you get
Why public calibration
The weights are based on platform best-practice documentation. Real engagement data is the ground truth. We'd rather find the gaps now than after PH launch.
Every submission goes into /docs/validation.md. Results are posted openly.
Running until we hit 20 submissions across ≥3 platforms. Questions? Drop them below.