SSVC decision points as elimination, not just selection #1063
ahouseholder started this conversation in Ideas
I want to start a discussion about how we reason over SSVC decision points, and a common assumption I see in practice: that each decision point must be answered by selecting exactly one value, and that anything short of that is a failure or a blocker.
That framing is unnecessarily restrictive and slightly at odds with SSVC’s own design philosophy.
SSVC is explicitly built as a satisficing process: its documentation frames the goal as adequate, timely decisions rather than optimal ones. If adequacy, not optimality, is the goal, then demanding certainty at every decision point is the wrong objective.
A more accurate way to think about SSVC decision points—one that is already supported by the data structures and tooling—is elimination of known-false answers. If, in a single round of analysis, you can eliminate all values but one, you should absolutely do that. But if you cannot, eliminating some values is still real progress.
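As a minimal sketch (in Python, with values loosely modeled on SSVC's Exploitation decision point), elimination is just set subtraction over a decision point's value set:

```python
# Hypothetical sketch, not actual SSVC tooling: a decision point starts with
# every value plausible; analysis removes values known to be false.
exploitation = {"none", "poc", "active"}

# Evidence: a public proof of concept exists, so "none" is ruled out.
exploitation -= {"none"}

# We may be left with more than one plausible value -- that is still progress.
assert exploitation == {"poc", "active"}
```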
This is analogous to a test-taking strategy for multiple-choice exams (SAT, GRE, etc.): you may not know the correct answer, but ruling out answers that are definitely wrong still produces meaningful information gain even if you are left to guess from the remaining choices.
As Sherlock Holmes put it in *The Sign of the Four*: "When you have eliminated the impossible, whatever remains, however improbable, must be the truth."
Iteration and satisficing, not forced certainty
SSVC assessment is built to be iterative: decision points can be revisited as new evidence becomes available. The stopping condition is not "when we know everything" but "when the analysis is sufficient for the decision at hand."
Concretely, assessment of a decision point can stop when either:
1. All values except one have been eliminated; or
2. Enough values have been eliminated that no remaining distinction can change the outcome of the decision table.
Both are valid terminal states under a satisficing framework.
This allows analysis to proceed to the depth necessary to resolve outcomes, while explicitly permitting early stopping once additional precision is no longer informative.
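Stopping condition 2 can be made concrete: once every combination of not-yet-eliminated values maps to the same outcome, further elimination is uninformative. A minimal sketch with an invented two-point table (illustrative only, not a real SSVC decision model):

```python
from itertools import product

# Invented (exploitation, exposure) -> outcome table for illustration;
# real SSVC decision models have more points and different mappings.
table = {
    ("none",   "small"): "defer",
    ("none",   "open"):  "scheduled",
    ("poc",    "small"): "scheduled",
    ("poc",    "open"):  "out-of-cycle",
    ("active", "small"): "out-of-cycle",
    ("active", "open"):  "out-of-cycle",
}

def reachable_outcomes(exploitation, exposure):
    """Outcomes still possible given the values not yet eliminated."""
    return {table[combo] for combo in product(exploitation, exposure)}

# "none" has been eliminated and exposure is known to be "open": every
# remaining combination yields the same outcome, so assessment can stop
# even though exploitation was never resolved to a single value.
print(reachable_outcomes({"poc", "active"}, {"open"}))  # {'out-of-cycle'}
```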
Why elimination matters operationally
Reasoning by elimination enables practical behaviors that align directly with satisficing: partial conclusions can be recorded and acted on rather than discarded. This is satisficing in practice: stop when the result is adequately determined.
Implications for LLM-assisted analysis
This framing is especially important for LLM-based analysis assistants. Requiring an LLM to produce the single correct value for a decision point may be an unnecessarily high bar. Helping to eliminate incorrect values is both more attainable and more useful.
An assistant that can narrow the possibility space provides real value even when it cannot justify a single answer.
An LLM-based analysis tool might iteratively use the reduced possibility space to develop a plan to collect the information necessary to discriminate outcomes in the context of the decision being modeled, refining the information collected and analyzed until the outcome becomes clear. In many decision models, reducing the decision space to a single outcome happens without necessarily requiring singular answers to each individual decision point. From an analytical standpoint, every question you don’t have to answer saves some combination of time, cost, or effort.
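The iterative loop described above can be sketched as follows; the evidence sources, outcome map, and function names are all hypothetical, standing in for whatever queries a real tool would run:

```python
# Hypothetical satisficing loop: consult evidence sources only while the
# remaining possibility space still spans more than one outcome.
def assess(remaining, evidence_sources, outcomes_of):
    for gather in evidence_sources:
        if len(outcomes_of(remaining)) == 1:
            break  # satisficing stop: more precision is no longer informative
        remaining = remaining - gather(remaining)  # eliminate known-false values
    return remaining, outcomes_of(remaining)

# Illustrative single-point outcome map (not a real SSVC table).
outcome_map = {"none": "defer", "poc": "act", "active": "act"}
outcomes_of = lambda values: {outcome_map[v] for v in values}

# Each "source" returns values it can rule out; a real assistant would plan
# and run actual information collection here.
sources = [
    lambda values: {"none"},  # e.g. a public PoC exists, so "none" is false
    lambda values: {"poc"},   # never consulted: the outcome is already fixed
]

remaining, outs = assess({"none", "poc", "active"}, sources, outcomes_of)
# remaining == {"poc", "active"}; outs == {"act"} -- the question
# "poc or active?" never had to be answered.
```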
This is already in SSVC—just not always foregrounded
None of this requires changes to SSVC. The existing selection object already allows assessors to eliminate incorrect values and retain all remaining plausible ones, and the SSVC calculator already operates correctly over those sets.
What I’m trying to surface is that this mode of reasoning is intentional, consistent with SSVC’s satisficing philosophy, and operationally powerful—but not always obvious in how people apply the framework.
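For concreteness, a selection that retains multiple plausible values might look roughly like this; the field names here are my own illustration and may differ from the actual SSVC selection schema:

```python
# Illustrative shape only; keys and structure are assumptions, not the
# actual SSVC selection object format.
selection = {
    "decision_point": "Exploitation",
    "values": ["poc", "active"],  # "none" eliminated; both remaining values retained
}
```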
I’m interested in discussion around this framing and how others apply it in practice.
Looking forward to hearing others’ perspectives.