On the perceived need to sort within outcome categories #950
ahouseholder started this conversation in Ideas
Background
In the SSVC documentation, we treat outcome values as equivalence sets. That is, we make no attempt to differentiate between two work units (reports, patches, etc.) that end up in the same outcome category (Immediate, Track, Publish, etc., depending on which decision model is being applied).
However, we've received feedback from some SSVC users (or those who are interested but not yet committed to becoming SSVC users) that the existing SSVC outcomes are too coarse-grained for their vulnerability management workflows, creating a perceived need to sort within SSVC categories for additional prioritization. We have found this to be a common concern when using SSVC in large organizations with many vulnerabilities to manage.
What to do about it
I think there are five directions one can go from this. The ideas that follow are not meant to be mutually exclusive, and in fact could be combined in different ways to address stakeholder concerns.
Note
TL;DR: As a preview, the approaches are:
1. Re-evaluate needs
2. Refine the chosen outcome decision point to adjust granularity
3. Add another input decision point to better discriminate between outcome states
4. Create an upstream Decision Table to refine an existing input decision point value selection
5. Use outside data to sort within categories

I'm going to provide a bit more illustration for each of these below.
Re-evaluate needs
Without intending to sound flip, I wonder if there isn't a tendency for folks coming from other prioritization schemes (many of which express their outcomes as numbers) to become accustomed to the idea that numbers (especially those with multiple significant digits) are inherently sortable, and that the sort contains important information over and above categorical boundaries.
I'm not going to go into arguments about intervals vs binning or whether CVSS has error bars here. In fact, I don't really care that the example I chose was CVSS. Any oracle function that returns a number drawn from a relatively large pool of possible scores would have the same problem. (The exception being scores built from measurements that have some objective basis, but I don't know what the SI units are for vulnerability.)
So the question that our hypothetical SSVC user should answer for themselves here is:
Do you really need to sort within a given SSVC outcome category, or have you become acclimated to using a numeric score and sorting was a convenient side effect?
Perhaps, on further consideration, the user might conclude that it's possible to try an SSVC-based workflow (potentially applying some of the tuning suggested below) to reduce the perceived need to further discriminate within categorical outcomes. It can be a bit of an adjustment in mindset, but we've seen organizations do this successfully.
Refine the chosen outcome decision point to adjust granularity
The user's perceived need to sort within an SSVC outcome category may indicate that the outcome categories are in fact too coarse-grained for their needs. Users might consider adjusting their decision model to produce more granular outcomes. For example, they could create a new outcome decision point that has 7 levels instead of 4, and then re-map their existing Decision Table's value combinations onto the new 7-way outcome. We've seen organizations do this successfully as well.
The user question here then is: Do I need a higher-resolution outcome set to adequately reflect my desired decision model?
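To make the refinement concrete, here is a minimal sketch of remapping a coarse outcome set onto a finer one. The outcome names, input decision point names (`exposure`, `automatable`), and the specific splits are all hypothetical illustrations, not part of any official SSVC model.

```python
# Hypothetical sketch: splitting a coarse 4-level outcome set into a
# finer 7-level one. All names and mappings here are illustrative.

COARSE = ["defer", "scheduled", "out-of-cycle", "immediate"]
FINE = ["defer", "watch", "scheduled", "expedite",
        "out-of-cycle", "urgent", "immediate"]

def remap(row: dict, coarse_outcome: str) -> str:
    """Re-map one decision-table row's outcome onto the finer scale.

    Two of the coarse outcomes are split based on input decision point
    values; the rest map onto themselves unchanged.
    """
    if coarse_outcome == "scheduled" and row.get("exposure") == "open":
        return "expedite"
    if coarse_outcome == "out-of-cycle" and row.get("automatable") == "yes":
        return "urgent"
    return coarse_outcome

row = {"exposure": "open", "automatable": "no"}
print(remap(row, "scheduled"))  # -> expedite
```

The point is that every existing value combination in the Decision Table gets revisited and assigned one of the new, finer-grained outcomes.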
Add another input decision point to better discriminate between outcome states
The user's perceived need to sort within an SSVC outcome category might indicate that, while they're generally okay with the outcome categories, they recognize that there is additional information that they're using to discriminate between items in a single outcome category. In this scenario, it may be possible for the user to adequately express that additional information in the form of an additional decision point (or more than one, but I suggest we treat this as an iterative process to maintain the satisficing principle of making adequate, not optimal decisions). Regardless, adding a decision point changes the structure of the Decision Table, so it will also require the mapping to be revisited. The downside of this is that adding Decision Points on the input side has a multiplicative impact on the size of the resulting Decision Table.
Here, the user question is: Is there information I could capture in my decision model by adding another (new or existing) decision point to the input model?
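The multiplicative growth is easy to see by enumerating the row combinations directly. The decision point names and value sets below are illustrative stand-ins, not a specific SSVC model.

```python
# Illustrative sketch: each input decision point multiplies the number
# of decision-table rows by its number of values. Names are hypothetical.
from itertools import product

decision_points = {
    "Exploitation": ["none", "poc", "active"],
    "Automatable": ["no", "yes"],
    "Human Impact": ["low", "medium", "high", "very high"],
}

rows = list(product(*decision_points.values()))
print(len(rows))  # 3 * 2 * 4 = 24 rows to map

# Adding one more 3-valued decision point triples the table size:
decision_points["Exposure"] = ["small", "controlled", "open"]
rows = list(product(*decision_points.values()))
print(len(rows))  # 24 * 3 = 72 rows to map
```

Every one of those rows needs an outcome assigned, which is why each added input decision point makes the remapping exercise noticeably larger.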
Create an upstream Decision Table to refine an existing input decision point value selection
In the same way that we combine Safety Impact and Mission Impact into Human Impact (we've referred to these in the past as compound decision points, but they're really just small Decision Tables that allow decision points to be combined into other decision point values), it's possible for SSVC users to create upstream decision tables to refine their selection process for an existing decision point. Creating small-scale refinement decision tables can also increase resolution on the input side of an existing decision model without causing the original decision table to blow up in size. Taken to the extreme, this could become a data mapping exercise, but I'm not suggesting most folks would need to go that far out of the gate.
The user question here is: Can I combine additional categorical information upstream of my existing decision model to capture a more refined value selection?
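A minimal sketch of what such an upstream Decision Table looks like: two decision point values combine into a single value that then feeds the main decision model. The value names and the mapping below are illustrative only, not the official SSVC Human Impact table.

```python
# Sketch of an upstream (compound) Decision Table: Safety Impact and
# Mission Impact combine into Human Impact. The value names and the
# mapping are illustrative, not the official SSVC tables.

HUMAN_IMPACT = {
    ("minor", "degraded"): "low",
    ("minor", "crippled"): "medium",
    ("major", "degraded"): "medium",
    ("major", "crippled"): "high",
    ("hazardous", "degraded"): "high",
    ("hazardous", "crippled"): "very high",
}

def human_impact(safety: str, mission: str) -> str:
    """Look up the combined value from the upstream table."""
    return HUMAN_IMPACT[(safety, mission)]

# The combined value is what enters the main decision table, so the
# main table's size is unchanged by the upstream refinement.
print(human_impact("major", "crippled"))  # -> high
```

Because only the single combined value flows downstream, the main Decision Table stays the same size while the input side gains resolution.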
Use outside data to sort within categories
In this approach, the SSVC user takes the output of their SSVC decision model as-is, and uses some additional source of sortable values to order the work items within a given category. Comparatively speaking, this approach has very low upfront costs: no adaptation of the decision model is required. You simply pair SSVC with the extra data source and sort within each category using the value from that external source.
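As a sketch of this pairing, the snippet below ranks items by outcome category first, then by an external sortable value within each category. The `external_score` field is a hypothetical stand-in for whatever source the user chooses (EPSS probabilities would be one example); the outcome names and ranking are likewise illustrative.

```python
# Sketch: sort within SSVC outcome categories using an external score.
# Field names, outcomes, and data are illustrative.

work_items = [
    {"id": "V-1", "outcome": "immediate", "external_score": 0.42},
    {"id": "V-2", "outcome": "scheduled", "external_score": 0.91},
    {"id": "V-3", "outcome": "immediate", "external_score": 0.88},
]

# Rank categories first (SSVC outcome), then sort within each category
# by the external score, descending.
OUTCOME_RANK = {"immediate": 0, "out-of-cycle": 1, "scheduled": 2, "defer": 3}

ordered = sorted(
    work_items,
    key=lambda w: (OUTCOME_RANK[w["outcome"]], -w["external_score"]),
)
print([w["id"] for w in ordered])  # -> ['V-3', 'V-1', 'V-2']
```

Note that the SSVC outcome still dominates the ordering; the external value only breaks ties within a category, which keeps the decision model itself untouched.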
Summary: An iterative process?
I think this flow chart captures the process I'm picturing based on the above.
Conclusion
Here I've outlined five ways to address the perceived need to sort within categories. One just requires re-evaluating expectations, three adjust the information within the decision model, and one augments the decision model with outside data without altering the model.
Requesting feedback
What do folks think about this? Are there other options I've missed or neglected?