-- :material-television-shimmer:{ .lg .middle } __Get Started with SSVC__
+- :material-television-shimmer:{ .lg .middle } **Get Started with SSVC**
---
@@ -20,7 +19,7 @@ We have organized the SSVC documentation into four main sections:
[:octicons-arrow-right-24: Learning SSVC](tutorials/index.md)
-- :material-clipboard-check:{ .lg .middle } __SSVC How To__
+- :material-clipboard-check:{ .lg .middle } **SSVC How To**
---
@@ -29,7 +28,7 @@ We have organized the SSVC documentation into four main sections:
[:octicons-arrow-right-24: SSVC How To](howto/index.md)
-- :fontawesome-solid-book:{ .lg .middle } __Learn More about SSVC__
+- :fontawesome-solid-book:{ .lg .middle } **Learn More about SSVC**
---
@@ -38,7 +37,7 @@ We have organized the SSVC documentation into four main sections:
[:octicons-arrow-right-24: Understanding SSVC](topics/index.md)
-- :material-book-open-page-variant:{ .lg .middle } __SSVC Reference__
+- :material-book-open-page-variant:{ .lg .middle } **SSVC Reference**
---
@@ -49,5 +48,4 @@ We have organized the SSVC documentation into four main sections:
-
{% include-markdown "_includes/helping_out.md" heading-offset=1 %}
diff --git a/docs/reference/code/analyze_csv.md b/docs/reference/code/analyze_csv.md
index 1f47e1ab..8bee1a2c 100644
--- a/docs/reference/code/analyze_csv.md
+++ b/docs/reference/code/analyze_csv.md
@@ -1,4 +1,3 @@
# SSVC CSV Analyzer
::: ssvc.csv_analyzer
-
diff --git a/docs/reference/code/doctools.md b/docs/reference/code/doctools.md
index edd3b5e0..a589b962 100644
--- a/docs/reference/code/doctools.md
+++ b/docs/reference/code/doctools.md
@@ -1,4 +1,3 @@
# Doctools
::: ssvc.doctools
-
diff --git a/docs/reference/code/index.md b/docs/reference/code/index.md
index 726664c0..8f2f47ad 100644
--- a/docs/reference/code/index.md
+++ b/docs/reference/code/index.md
@@ -6,4 +6,4 @@ These include:
- [CSV Analyzer](analyze_csv.md)
- [Policy Generator](policy_generator.md)
- [Outcomes](outcomes.md)
-- [Doctools](doctools.md)
\ No newline at end of file
+- [Doctools](doctools.md)
diff --git a/docs/reference/code/policy_generator.md b/docs/reference/code/policy_generator.md
index fa6e8477..4520599f 100644
--- a/docs/reference/code/policy_generator.md
+++ b/docs/reference/code/policy_generator.md
@@ -5,5 +5,4 @@ policy (a decision tree) from a set of input parameters.
It is intended to be used as a library, for example within a Jupyter notebook.
-
-::: ssvc.policy_generator
\ No newline at end of file
+::: ssvc.policy_generator
diff --git a/docs/reference/decision_points/automatable.md b/docs/reference/decision_points/automatable.md
index 171c7cbb..f3b1cedd 100644
--- a/docs/reference/decision_points/automatable.md
+++ b/docs/reference/decision_points/automatable.md
@@ -1,6 +1,5 @@
# Automatable
-
```python exec="true" idprefix=""
from ssvc.decision_points.automatable import LATEST
from ssvc.doc_helpers import example_block
@@ -8,13 +7,11 @@ from ssvc.doc_helpers import example_block
print(example_block(LATEST))
```
-
!!! tip "See also"
Automatable combines with [Value Density](./value_density.md) to inform
[Utility](./utility.md)
-
*Automatable* captures the answer to the question “Can an attacker reliably automate creating exploitation events for this vulnerability?”
!!! question "What are Steps 1-4 of the Kill Chain?"
@@ -29,13 +26,11 @@ print(example_block(LATEST))
2. weaponization may require human direction for each target
3. delivery may require channels that widely deployed network security configurations block
4. exploitation is not reliable, due to exploit-prevention techniques (e.g., ASLR) enabled by default
-
!!! question "When is Automatable *yes*?"
If the vulnerability allows remote code execution or command injection, the expected response should be yes.
-
Due to vulnerability chaining, there is some nuance as to whether reconnaissance can be automated.
!!! example "Vulnerability Chaining"
@@ -78,4 +73,3 @@ for version in versions:
*Virulence* is superseded by *Automatable*, which clarified the concept
we were attempting to capture.
-
\ No newline at end of file
diff --git a/docs/reference/decision_points/compound_decision_points.md b/docs/reference/decision_points/compound_decision_points.md
index cca71dfe..43f0e385 100644
--- a/docs/reference/decision_points/compound_decision_points.md
+++ b/docs/reference/decision_points/compound_decision_points.md
@@ -7,4 +7,3 @@ Examples of compound decision points include:
- [Human Impact](human_impact.md)
- [Public Safety Impact](public_safety_impact.md)
- [Utility](utility.md)
-
diff --git a/docs/reference/decision_points/exploitation.md b/docs/reference/decision_points/exploitation.md
index bed76396..478d4033 100644
--- a/docs/reference/decision_points/exploitation.md
+++ b/docs/reference/decision_points/exploitation.md
@@ -1,5 +1,4 @@
-# Exploitation
-
+# Exploitation
```python exec="true" idprefix=""
from ssvc.decision_points.exploitation import LATEST
@@ -36,10 +35,8 @@ The intent of this measure is the present state of exploitation of the vulnerabi
## CWE-IDs for *PoC*
-
The table below lists CWE-IDs that could be used to mark a vulnerability as *PoC* if the vulnerability is described by the CWE-ID.
-
!!! example "CWE-295"
For example, [CWE-295 Improper Certificate Validation
diff --git a/docs/reference/decision_points/human_impact.md b/docs/reference/decision_points/human_impact.md
index 7f970a6b..04057d11 100644
--- a/docs/reference/decision_points/human_impact.md
+++ b/docs/reference/decision_points/human_impact.md
@@ -15,7 +15,7 @@ print(example_block(LATEST))
Note: This is a compound decision point[^1]; therefore, it is a notational convenience.
*Human Impact* is a combination of how a vulnerability can affect an organization's mission essential functions as well as
-safety considerations, whether for the organization's personnel or the public at large.
+safety considerations, whether for the organization's personnel or the public at large.
We observe that the day-to-day operations of an organization often already incorporate a degree of tolerance to small-scale variance in mission impacts.
Thus, in our opinion, we need only concern ourselves with discriminating well at the upper end of the scale.
Therefore we combine the two lesser mission impacts of degraded and MEF support crippled into a single category, while retaining the distinction between MEF Failure and Mission Failure at the extreme.
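The dimension-reduction step described above can be sketched as a small helper. This is purely illustrative: the function and value names are assumptions for this sketch, not the actual `ssvc` package API.

```python
# Illustrative sketch of the dimension-reduction step described above:
# the two lesser mission impacts collapse into one category, while
# MEF Failure and Mission Failure stay distinct at the extreme.
# Names are assumptions, not the ssvc package API.
def collapse_mission_impact(mission_impact: str) -> str:
    value = mission_impact.lower()
    if value in {"degraded", "mef support crippled"}:
        return "degraded/crippled"
    return value  # "mef failure" or "mission failure" stay distinct
```
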
@@ -30,10 +30,9 @@ The mapping is shown in the table above.
[^1]: In pilot implementations of SSVC, we received feedback that organizations tend to think of mission and safety impacts as
if they were combined into a single factor: in other words, the priority increases regardless of which of the two impact factors was increased.
We therefore combine [Safety Impact](safety_impact.md) and
-[Mission Impact](mission_impact.md) for deployers into a single _Human Impact_ factor
+[Mission Impact](mission_impact.md) for deployers into a single *Human Impact* factor
as a dimension reduction step.
-
## Safety and Mission Impact Decision Points for Industry Sectors
We expect to encounter diversity in both safety and mission impacts across different organizations.
@@ -45,7 +44,6 @@ provide SSVC information tailored as appropriate to their constituency's safety
For considerations on how organizations might communicate SSVC information to their constituents,
see [Guidance on Communicating Results](../../howto/bootstrap/use.md).
-
## Prior Versions
```python exec="true" idprefix=""
diff --git a/docs/reference/decision_points/index.md b/docs/reference/decision_points/index.md
index ec64f9a7..1f002796 100644
--- a/docs/reference/decision_points/index.md
+++ b/docs/reference/decision_points/index.md
@@ -41,7 +41,7 @@ decision points.
Sometimes this is a "better" or "worse" dimension, but it seems to generalize to
a "more likely to act" or "less likely to act" of dimension.
-!!! question "Where are the _Unknown_ options?"
+!!! question "Where are the *Unknown* options?"
One important omission from the values for each category is an *unknown* option.
Instead, we recommend explicitly identifying an option that is a reasonable assumption based on prior events.
diff --git a/docs/reference/decision_points/mission_impact.md b/docs/reference/decision_points/mission_impact.md
index 9af10310..f4aa3a48 100644
--- a/docs/reference/decision_points/mission_impact.md
+++ b/docs/reference/decision_points/mission_impact.md
@@ -12,31 +12,30 @@ print(example_block(LATEST))
Mission Impact combines with [Safety Impact](./safety_impact.md) to inform
[Human Impact](./human_impact.md)
-A **mission essential function (MEF)** is a function “directly related to accomplishing the organization’s mission as set forth in its statutory or executive charter” [@FCD2_2017, page A-1].
-Identification and prioritization of mission essential functions enables effective continuity planning or crisis planning.
+A **mission essential function (MEF)** is a function “directly related to accomplishing the organization’s mission as set forth in its statutory or executive charter” [@FCD2_2017, page A-1].
+Identification and prioritization of mission essential functions enables effective continuity planning or crisis planning.
Mission Essential Functions are, in effect, critical activities within an organization that are used to identify key assets, supporting tasks, and resources that an organization requires to remain operational in a crisis situation, and so must be included in its planning process.
During an event, key resources may be limited and personnel may be unavailable, so organizations must consider these factors and validate assumptions when identifying, validating, and prioritizing MEFs.
-When reviewing the list of organizational functions, an organization must first identify whether a function is essential or non-essential.
-The distinction between these two categories is whether or not an organization must perform a function during a disruption to normal operations and must continue performance during emergencies [@FCD2_2017, page B-2].
+When reviewing the list of organizational functions, an organization must first identify whether a function is essential or non-essential.
+The distinction between these two categories is whether or not an organization must perform a function during a disruption to normal operations and must continue performance during emergencies [@FCD2_2017, page B-2].
Essential functions are both important and urgent.
Functions that can be deferred until after an emergency are identified as non-essential.
For example, DoD defines MEFs in [DoD Directive 3020.26 DoD Continuity Policy](https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/302026p.pdf) using similar terminology to [FCD-2](https://www.fema.gov/sites/default/files/2020-07/Federal_Continuity_Directive-2_June132017.pdf) [@dod3026_26_2018].
-As mission essential functions are most clearly defined for government agencies, stakeholders in other sectors may be familiar with different terms of art from continuity planning.
-For example, infrastructure providers in the US may better align with [National Critical Functions](https://www.cisa.gov/national-critical-functions).
-Private sector businesses may better align with [operational and financial impacts](https://www.ready.gov/sites/default/files/2020-03/business-impact-analysis-worksheet.pdf) in a [business continuity plan](https://www.ready.gov/business-continuity-plan).
+As mission essential functions are most clearly defined for government agencies, stakeholders in other sectors may be familiar with different terms of art from continuity planning.
+For example, infrastructure providers in the US may better align with [National Critical Functions](https://www.cisa.gov/national-critical-functions).
+Private sector businesses may better align with [operational and financial impacts](https://www.ready.gov/sites/default/files/2020-03/business-impact-analysis-worksheet.pdf) in a [business continuity plan](https://www.ready.gov/business-continuity-plan).
While the processes, terminology, and audience for these different frameworks differ, they all can provide a sense of the criticality of an asset or assets within the scope of the stakeholder conducting the cyber vulnerability prioritization with SSVC.
-In that sense they all function quite similarly within SSVC. Organizations should use whatever is most appropriate for their stakeholder context, with Mission Essential Function analysis serving as a fully worked example in the SSVC documents.
-
+In that sense they all function quite similarly within SSVC. Organizations should use whatever is most appropriate for their stakeholder context, with Mission Essential Function analysis serving as a fully worked example in the SSVC documents.
## Gathering Information About Mission Impact
-The factors that influence the mission impact level are diverse.
-This paper does not exhaustively discuss how a stakeholder should answer a question; that is a topic for future work.
-At a minimum, understanding mission impact should include gathering information about the critical paths that involve vulnerable components, viability of contingency measures, and resiliency of the systems that support the mission.
-There are various sources of guidance on how to gather this information; see for example the FEMA guidance in Continuity Directive 2 [@FCD2_2017] or OCTAVE FORTE [@tucker2018octave].
+The factors that influence the mission impact level are diverse.
+This paper does not exhaustively discuss how a stakeholder should answer a question; that is a topic for future work.
+At a minimum, understanding mission impact should include gathering information about the critical paths that involve vulnerable components, viability of contingency measures, and resiliency of the systems that support the mission.
+There are various sources of guidance on how to gather this information; see for example the FEMA guidance in Continuity Directive 2 [@FCD2_2017] or OCTAVE FORTE [@tucker2018octave].
This is part of risk management more broadly.
It should require the vulnerability management team to interact with more senior management to understand mission priorities and other aspects of risk mitigation.
diff --git a/docs/reference/decision_points/public_safety_impact.md b/docs/reference/decision_points/public_safety_impact.md
index 9943ddac..5564ac7f 100644
--- a/docs/reference/decision_points/public_safety_impact.md
+++ b/docs/reference/decision_points/public_safety_impact.md
@@ -16,9 +16,9 @@ This is a compound decision point, therefore it is a notational convenience.
Suppliers necessarily have a rather coarse-grained perspective on the broadly defined [Safety Impact](safety_impact.md) Decision Point.
Therefore we simplify the above into a binary categorization:
-- _Significant_ is when any impact meets the criteria for an impact of Marginal, Critical, or Catastrophic in the
+- *Significant* is when any impact meets the criteria for an impact of Marginal, Critical, or Catastrophic in the
[Safety Impact](safety_impact.md) table.
-- _Minimal_ is when none do.
+- *Minimal* is when none do.
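The binary categorization above can be sketched as a small predicate. This is an illustrative sketch only: the function and value names are assumptions, not the actual `ssvc` package API.

```python
# Illustrative sketch of the binary categorization described above.
# Function and value names are assumptions, not the ssvc package API.
SIGNIFICANT_LEVELS = {"marginal", "critical", "catastrophic"}

def public_safety_impact(safety_impacts):
    """Return 'significant' if any Safety Impact value meets Marginal,
    Critical, or Catastrophic; otherwise 'minimal'."""
    if any(level.lower() in SIGNIFICANT_LEVELS for level in safety_impacts):
        return "significant"
    return "minimal"
```
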
## Prior Versions
diff --git a/docs/reference/decision_points/public_value_added.md b/docs/reference/decision_points/public_value_added.md
index ad5759a9..0284c0da 100644
--- a/docs/reference/decision_points/public_value_added.md
+++ b/docs/reference/decision_points/public_value_added.md
@@ -7,12 +7,11 @@ from ssvc.doc_helpers import example_block
print(example_block(LATEST))
```
-
-The intent of the definition is that one rarely if ever transitions from _limited_ to _ampliative_ or _ampliative_ to _precedence_.
-A vulnerability could transition from _precedence_ to _ampliative_ and _ampliative_ to _limited_.
+The intent of the definition is that one rarely if ever transitions from *limited* to *ampliative* or *ampliative* to *precedence*.
+A vulnerability could transition from *precedence* to *ampliative* and *ampliative* to *limited*.
That is, *Public Value Added* should only be downgraded through future iterations or re-evaluations.
This directionality is because once other organizations make something public, they cannot effectively un-publish it
(it'll be recorded and people will know about it, even if they take down a webpage).
-The rare case where *Public Value Added* increases would be if an organization published viable information, but
+The rare case where *Public Value Added* increases would be if an organization published viable information, but
then published additional misleading or obscuring information at a later time.
Then one might go from *limited* to *ampliative* in the interest of pointing to the better information.
diff --git a/docs/reference/decision_points/report_credibility.md b/docs/reference/decision_points/report_credibility.md
index 647360a1..acce744c 100644
--- a/docs/reference/decision_points/report_credibility.md
+++ b/docs/reference/decision_points/report_credibility.md
@@ -7,7 +7,6 @@ from ssvc.doc_helpers import example_block
print(example_block(LATEST))
```
-
An analyst should start with a presumption of credibility and proceed toward disqualification.
The reason for this is that, as a coordinator, occasionally doing a bit of extra work on a bad report is preferable to rejecting legitimate reports.
This is essentially stating a preference for false positives over false negatives with respect to credibility determination.
@@ -30,35 +29,35 @@ The indicators for or against are not commensurate, and so they cannot be put on
If neither of these confirmations are available, then the value of the [*Report Credibility*](#report-credibility) decision point depends on a balancing test among the following indicators.
**Indicators *for* Credibility** include:
-
- - The report is specific about what is affected
- - The report provides sufficient detail to reproduce the vulnerability.
- - The report describes an attack scenario.
- - The report suggests mitigations.
- - The report includes proof-of-concept exploit code or steps to reproduce.
- - Screenshots and videos, if provided, support the written text of the report and do not replace it.
- - The report neither exaggerates nor understates the impact.
+
+- The report is specific about what is affected
+- The report provides sufficient detail to reproduce the vulnerability.
+- The report describes an attack scenario.
+- The report suggests mitigations.
+- The report includes proof-of-concept exploit code or steps to reproduce.
+- Screenshots and videos, if provided, support the written text of the report and do not replace it.
+- The report neither exaggerates nor understates the impact.
**Indicators *against* Credibility** include:
- - The report is “spammy” or exploitative (for example, the report is an attempt to upsell the receiver on some product or service).
- - The report is vague or ambiguous about which vendors, products, or versions are affected (for example, the report claims that all “cell phones” or “wifi” or “routers” are affected).
- - The report is vague or ambiguous about the preconditions necessary to exploit the vulnerability.
- - The report is vague or ambiguous about the impact if exploited.
- - The report exaggerates the impact if exploited.
- - The report makes extraordinary claims without correspondingly extraordinary evidence (for example, the report claims that exploitation could result in catastrophic damage to some critical system without a clear causal connection between the facts presented and the impacts claimed).
- - The report is unclear about what the attacker gains by exploiting the vulnerability. What do they get that they didn't already have? For example, an attacker with system privileges can already do lots of bad things, so a report that assumes system privileges as a precondition to exploitation needs to explain what else this gives the attacker.
- - The report depends on preconditions that are extremely rare in practice, and lacks adequate evidence for why those preconditions might be expected to occur (for example, the vulnerability is only exposed in certain non-default configurations—unless there is evidence that a community of practice has established a norm of such a non-default setup).
- - The report claims dire impact for a trivially found vulnerability. It is not impossible for this to occur, but most products and services that have been around for a while have already had their low-hanging fruit major vulnerabilities picked. One notable exception would be if the reporter applied a completely new method for finding vulnerabilities to discover the subject of the report.
- - The report is rambling and is more about a narrative than describing the vulnerability. One description is that the report reads like a food recipe with the obligatory search engine optimization preamble.
- - The reporter is known to have submitted low-quality reports in the past.
- - The report conspicuously misuses technical terminology. This is evidence that the reporter may not understand what they are talking about.
- - The analyst's professional colleagues consider the report to be not credible.
- - The report consists of mostly raw tool output. Fuzz testing outputs are not vulnerability reports.
- - The report lacks sufficient detail for someone to reproduce the vulnerability.
- - The report is just a link to a video or set of images, or lacks written detail while claiming “it's all in the video”. Imagery should support a written description, not replace it.
- - The report describes a bug with no discernible security impact.
- - The report fails to describe an attack scenario, and none is obvious.
+- The report is “spammy” or exploitative (for example, the report is an attempt to upsell the receiver on some product or service).
+- The report is vague or ambiguous about which vendors, products, or versions are affected (for example, the report claims that all “cell phones” or “wifi” or “routers” are affected).
+- The report is vague or ambiguous about the preconditions necessary to exploit the vulnerability.
+- The report is vague or ambiguous about the impact if exploited.
+- The report exaggerates the impact if exploited.
+- The report makes extraordinary claims without correspondingly extraordinary evidence (for example, the report claims that exploitation could result in catastrophic damage to some critical system without a clear causal connection between the facts presented and the impacts claimed).
+- The report is unclear about what the attacker gains by exploiting the vulnerability. What do they get that they didn't already have? For example, an attacker with system privileges can already do lots of bad things, so a report that assumes system privileges as a precondition to exploitation needs to explain what else this gives the attacker.
+- The report depends on preconditions that are extremely rare in practice, and lacks adequate evidence for why those preconditions might be expected to occur (for example, the vulnerability is only exposed in certain non-default configurations—unless there is evidence that a community of practice has established a norm of such a non-default setup).
+- The report claims dire impact for a trivially found vulnerability. It is not impossible for this to occur, but most products and services that have been around for a while have already had their low-hanging fruit major vulnerabilities picked. One notable exception would be if the reporter applied a completely new method for finding vulnerabilities to discover the subject of the report.
+- The report is rambling and is more about a narrative than describing the vulnerability. One description is that the report reads like a food recipe with the obligatory search engine optimization preamble.
+- The reporter is known to have submitted low-quality reports in the past.
+- The report conspicuously misuses technical terminology. This is evidence that the reporter may not understand what they are talking about.
+- The analyst's professional colleagues consider the report to be not credible.
+- The report consists of mostly raw tool output. Fuzz testing outputs are not vulnerability reports.
+- The report lacks sufficient detail for someone to reproduce the vulnerability.
+- The report is just a link to a video or set of images, or lacks written detail while claiming “it's all in the video”. Imagery should support a written description, not replace it.
+- The report describes a bug with no discernible security impact.
+- The report fails to describe an attack scenario, and none is obvious.
We considered adding poor grammar or spelling as an indicator of non-credibility.
On further reflection, we do not recommend that poor grammar or spelling be used as an indicator of low report quality, as many reporters may not be native speakers of the coordinator's language.
@@ -78,7 +77,5 @@ Furthermore, a report may be factual but not identify any security implications;
A coordinator also has a scope defined by their specific constituency and mission.
A report can be entirely credible yet remain out of scope for your coordination practice.
-Decide what to do about out of scope reports separately, before the vulnerability coordination triage decision begins.
+Decide what to do about out-of-scope reports separately, before the vulnerability coordination triage decision begins.
If a report arrives and would be out of scope even if true, there will be no need to proceed with judging its credibility.
-
-
diff --git a/docs/reference/decision_points/safety_impact.md b/docs/reference/decision_points/safety_impact.md
index 425dd7a0..1601cde4 100644
--- a/docs/reference/decision_points/safety_impact.md
+++ b/docs/reference/decision_points/safety_impact.md
@@ -7,7 +7,6 @@ from ssvc.doc_helpers import example_block
print(example_block(LATEST))
```
-
!!! tip "See also"
- Safety Impact combines with [Mission Impact](./mission_impact.md) to
@@ -46,7 +45,7 @@ If the stakeholder is contractually or legally responsible for safe operation of
For software used in a wide variety of sectors and deployments, the stakeholder may need to estimate an aggregate safety impact.
Aggregation suggests that the stakeholder’s response to this decision point cannot be less than the most severe credible safety impact, but we leave the specific aggregation method or function as a domain-specific extension for future work.
-### Gathering Information About Safety Impact
+## Gathering Information About Safety Impact
The factors that influence the safety impact level are diverse.
This paper does not exhaustively discuss how a stakeholder should answer a question; that is a topic for future work.
@@ -58,7 +57,6 @@ The decision values for safety impact are based on the hazard categories for air
To assign a value to [*Safety Impact*](safety_impact.md), at least one type of harm must reach that value. For example, for a [*Safety Impact*](safety_impact.md) of [*major*](safety_impact.md), at least one type of harm must reach [*major*](safety_impact.md) level.
Not all types of harm need to rise to the level of [*major*](safety_impact.md); just one type does.
-
-
### Situated Safety Impact
Deployers are anticipated to have a more fine-grained perspective on the safety impacts broadly defined in *Safety Impact*.
We defer this topic for now because we combine it with [*Mission Impact*](mission_impact.md) to simplify implementation for deployers.
-
## Prior Versions
```python exec="true" idprefix=""
@@ -229,4 +225,3 @@ for version in versions:
print(example_block(version))
print("\n---\n")
```
-
diff --git a/docs/reference/decision_points/supplier_contacted.md b/docs/reference/decision_points/supplier_contacted.md
index def0c2b6..f75e1615 100644
--- a/docs/reference/decision_points/supplier_contacted.md
+++ b/docs/reference/decision_points/supplier_contacted.md
@@ -7,9 +7,6 @@ from ssvc.doc_helpers import example_block
print(example_block(LATEST))
```
-
!!! tip "Quality Contact Method"
A quality contact method is a publicly posted known good email address, public portal on vendor website, etc.
-
-
diff --git a/docs/reference/decision_points/system_exposure.md b/docs/reference/decision_points/system_exposure.md
index 4595895b..9a2f52dd 100644
--- a/docs/reference/decision_points/system_exposure.md
+++ b/docs/reference/decision_points/system_exposure.md
@@ -7,7 +7,6 @@ from ssvc.doc_helpers import example_block
print(example_block(LATEST))
```
-
Measuring the attack surface precisely is difficult, and we do not propose to perfectly delineate between small and controlled access.
Exposure should be judged against the system in its deployed context, which may differ from how it is commonly expected to be deployed.
For example, the exposure of a device on a vehicle's CAN bus will vary depending on the presence of a cellular telemetry device on the same bus.
@@ -18,7 +17,6 @@ Therefore, a deployer’s response to Exposure may change if such mitigations ar
If a mitigation changes exposure and thereby reduces the priority of a vulnerability, that mitigation can be considered a success.
Whether that mitigation allows the deployer to defer further action varies according to each case.
-
## Gathering Information About System Exposure
*System Exposure* is primarily used by Deployers, so the question is about whether some specific system is in fact exposed, not a hypothetical or aggregate question about systems of that type.
@@ -32,13 +30,14 @@ An analyst should also choose *open* for a phone or PC that connects to the web
Distinguishing between *small* and *controlled* is more nuanced.
If *open* has been ruled out, some suggested heuristics for differentiating the other two are as follows.
Apply these heuristics in order and stop when one of them applies.
- - If the system's networking and communication interfaces have been physically removed or disabled, choose *small*.
- - If [*Automatable*](automatable.md) is [*yes*](automatable.md), then choose *controlled*. The reasoning behind this heuristic is that if reconnaissance through exploitation is automatable, then the usual deployment scenario exposes the system sufficiently that access can be automated, which contradicts the expectations of *small*.
- - If the vulnerable component is on a network where other hosts can browse the web or receive email, choose *controlled*.
- - If the vulnerable component is in a third party library that is unreachable because the feature is unused in the surrounding product, choose *small*.
+
+- If the system's networking and communication interfaces have been physically removed or disabled, choose *small*.
+- If [*Automatable*](automatable.md) is [*yes*](automatable.md), then choose *controlled*. The reasoning behind this heuristic is that if reconnaissance through exploitation is automatable, then the usual deployment scenario exposes the system sufficiently that access can be automated, which contradicts the expectations of *small*.
+- If the vulnerable component is on a network where other hosts can browse the web or receive email, choose *controlled*.
+- If the vulnerable component is in a third party library that is unreachable because the feature is unused in the surrounding product, choose *small*.
The unreachable vulnerable component scenario may be a point of concern for stakeholders like patch suppliers who often find it more cost-effective to simply update the included library to an existing fixed version rather than try to explain to customers why the vulnerable code is unreachable in their own product.
-In those cases, we suggest the stakeholder reviews the decision outcomes of the tree to ensure the appropriate action is taken (paying attention to [_defer_](../../howto/supplier_tree.md) vs [_scheduled_](../../howto/supplier_tree.md), for example).
+In those cases, we suggest the stakeholder reviews the decision outcomes of the tree to ensure the appropriate action is taken (paying attention to [*defer*](../../howto/supplier_tree.md) vs [*scheduled*](../../howto/supplier_tree.md), for example).
If you have suggestions for further heuristics, or potential counterexamples to these, please describe the example and reasoning in an issue on the [SSVC GitHub](https://github.com/CERTCC/SSVC/issues).
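The heuristics above can be summarized as a short triage helper, applied in document order. The sketch below is illustrative only — the function name, parameter names, and return strings are ours, not part of the `ssvc` package:

```python
from typing import Optional


def system_exposure_heuristic(
    interfaces_disabled: bool,
    automatable: bool,
    hosts_browse_or_email: bool,
    library_unreachable: bool,
) -> Optional[str]:
    """Apply the System Exposure heuristics in document order.

    Returns "small" or "controlled" when a heuristic applies,
    or None when the analyst must decide without them.
    """
    if interfaces_disabled:
        return "small"       # networking physically removed or disabled
    if automatable:
        return "controlled"  # automatable reconnaissance contradicts "small"
    if hosts_browse_or_email:
        return "controlled"  # web/email hosts share the network
    if library_unreachable:
        return "small"       # unused third-party code path
    return None


print(system_exposure_heuristic(False, True, False, False))  # controlled
```

Note that the ordering matters: *Automatable* being *yes* overrides the later *small* heuristics, consistent with the reasoning above.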
diff --git a/docs/reference/decision_points/technical_impact.md b/docs/reference/decision_points/technical_impact.md
index 5fc482f1..4b1dcaf6 100644
--- a/docs/reference/decision_points/technical_impact.md
+++ b/docs/reference/decision_points/technical_impact.md
@@ -16,7 +16,6 @@ Our definition of **vulnerability** is based on the determination that some secu
We consider a security policy violation to be a technical impact—or at least, a security policy violation must have some technical instantiation.
Therefore, if there is a vulnerability then there must be some technical impact.
-
!!! tip "Gathering Information About Technical Impact"
Assessing *Technical Impact* amounts to assessing the degree of control over the vulnerable component the attacker stands to gain by exploiting the vulnerability.
@@ -33,4 +32,3 @@ Therefore, if there is a vulnerability then there must be some technical impact.
If you find a vulnerability that should have *total* *Technical Impact* but that does not answer yes to any of
these questions, please describe the example and what question we might add to this list in an issue on the
[SSVC GitHub](https://github.com/CERTCC/SSVC/issues).
-
diff --git a/docs/reference/decision_points/utility.md b/docs/reference/decision_points/utility.md
index 4779439f..1c465d41 100644
--- a/docs/reference/decision_points/utility.md
+++ b/docs/reference/decision_points/utility.md
@@ -12,7 +12,6 @@ print(example_block(LATEST))
Utility is a combination of [Automatable](./automatable.md) and
[Value Density](./value_density.md)
-
This is a compound decision point; therefore, it is a notational convenience.
*Utility* estimates an adversary's benefit compared to their effort based on the assumption that they can exploit the vulnerability.
@@ -30,7 +29,6 @@ This framing makes it easier to analytically derive these categories from a desc
Roughly, *Utility* is a combination of two things: (1) the value of each exploitation event and (2) the ease and speed with which the adversary can cause exploitation events.
We define *Utility* as laborious, efficient, or super effective, as described in the table above.
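Because *Utility* is a compound decision point, it can be written as a lookup over its two inputs. The table itself is not reproduced in this excerpt; the combinations below follow our reading of the published Utility definition and should be verified against the rendered table. The dictionary keys and value strings are ours:

```python
# Sketch of Utility as a lookup over (Automatable, Value Density).
# The combinations follow our reading of the Utility table; verify
# against the rendered table before relying on them.
UTILITY = {
    ("no",  "diffuse"):      "laborious",
    ("no",  "concentrated"): "efficient",
    ("yes", "diffuse"):      "efficient",
    ("yes", "concentrated"): "super effective",
}

print(UTILITY[("yes", "concentrated")])  # super effective
```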
-
## Alternative Utility Outputs
Alternative heuristics can plausibly be used as proxies for adversary utility.
@@ -45,8 +43,6 @@ Price does not only track the [*Value Density*](value_density.md) of the system,
Currently, we simplify the analysis and ignore these factors.
However, future work should look for and prevent large mismatches between the outputs of the *Utility* decision point and the exploit markets.
-
-
## Previous Versions
```python exec="true" idprefix=""
@@ -62,4 +58,4 @@ for version in versions:
!!! tip "See also"
Utility v1.0.0 was a combination of [Virulence](./automatable.md) and
- [Value Density](./value_density.md)
\ No newline at end of file
+ [Value Density](./value_density.md)
diff --git a/docs/reference/index.md b/docs/reference/index.md
index af7ff33b..26a2efc0 100644
--- a/docs/reference/index.md
+++ b/docs/reference/index.md
@@ -11,19 +11,18 @@
In this section, we provide reference documentation for SSVC.
We have organized the reference documentation into two main sections:
-
-- :material-arrow-decision-outline: [**Decision Points**](decision_points/index.md)
+- :material-arrow-decision-outline: [**Decision Points**](decision_points/index.md)
---
-
+
A list of all the decision points, values, and versions.
-- :material-language-python: [**Code Documentation**](code/index.md)
-
+- :material-language-python: [**Code Documentation**](code/index.md)
+
---
Documentation for the SSVC Python modules.
-
\ No newline at end of file
+
diff --git a/docs/ssvc-calc/README.md b/docs/ssvc-calc/README.md
index db57b28d..f4e9d740 100644
--- a/docs/ssvc-calc/README.md
+++ b/docs/ssvc-calc/README.md
@@ -1,20 +1,19 @@
-# Dryad
+# Dryad
+
Stakeholder-Specific Vulnerability Categorization Calculator
Dryad is an SSVC calculator app that guides you through the simple steps needed to make
a vulnerability priority decision. The result of applying SSVC is a priority decision,
providing you with a recommended action. See the demo on our [SSVC calc website](https://democert.org/ssvc/).
-Some examples of actions are
+Some examples of actions are
defer, scheduled, out-of-cycle, and immediate.
-* The top drop-down allows you to select from multiple decision trees that map to an appropriate Role in SSVC.
-* To explore the decision tree, use the button "Show Full Tree" This will show all the branches, nodes and edges that make up the decision tree. A small zoom control horizontal range slider that can help with very large decision trees.
-* A drop-down allows you to move from Graphic mode to Simple mode.
-* There are also a number of sample CVE in a dropdown that will auto-select a number of steps in the decision tree
-* Use the "Start Decision" to navigate the tree for assesing your prioritization for a vulnerability.
-* You can also import custom decision trees and custom CVE samples for the current decision tree.
-* There is a [data](../data/) folder where there is a number of examples both of schema and examples of exported outputs.
-* You can install this directory as a folder in your public website directory. and expose it. All referenced url's are relative in the scripts and HTML files.
-
-
+- The top drop-down allows you to select from multiple decision trees that map to an appropriate Role in SSVC.
+- To explore the decision tree, use the "Show Full Tree" button. This shows all the branches, nodes, and edges that make up the decision tree. A small zoom control (a horizontal range slider) can help with very large decision trees.
+- A drop-down allows you to move from Graphic mode to Simple mode.
+- There are also a number of sample CVEs in a drop-down that will auto-select a number of steps in the decision tree.
+- Use the "Start Decision" button to navigate the tree when assessing your prioritization for a vulnerability.
+- You can also import custom decision trees and custom CVE samples for the current decision tree.
+- There is a [data](../data/) folder containing a number of examples, both of schemas and of exported outputs.
+- You can install this directory as a folder in your public website directory and expose it. All referenced URLs in the scripts and HTML files are relative.
diff --git a/docs/ssvc-calc/index.md b/docs/ssvc-calc/index.md
index 2152c657..7e9df37f 100644
--- a/docs/ssvc-calc/index.md
+++ b/docs/ssvc-calc/index.md
@@ -1,4 +1,5 @@
# SSVC Calculator
+
), which is a good place to go to check on progress or help.
Plans for future work focus on further requirements gathering, analysis of types of risk, and further testing of the reliability of the decision process.
## Requirements Gathering via Sociological Research
@@ -10,9 +10,9 @@ Plans for future work focus on further requirements gathering, analysis of types
The community should know what users of a vulnerability prioritization system want.
To explore their needs, it is important to understand how people actually use CVSS and what they think it tells them.
In general, such empirical, grounded evidence about what practitioners and decision makers want from vulnerability scoring is lacking.
-We have based SSVC’s methodology on multiple decades of professional experience and myriad informal conversations with practitioners.
-Such evidence is not a bad place to start, but it does not lend itself to examination and validation by others.
-The purpose of understanding practitioner expectations is to inform what a vulnerability-prioritization methodology should actually provide by matching it to what people need or expect.
+We have based SSVC’s methodology on multiple decades of professional experience and myriad informal conversations with practitioners.
+Such evidence is not a bad place to start, but it does not lend itself to examination and validation by others.
+The purpose of understanding practitioner expectations is to inform what a vulnerability-prioritization methodology should actually provide by matching it to what people need or expect.
The method this future work should take is long-form, structured interviews.
We do not expect anyone to have access to enough consumers of CVSS to get statistically valid results out of a short survey, nor to pilot a long survey.
@@ -43,7 +43,6 @@ The “credible effects” to consider are those of all vulnerabilities remediat
How exactly to aggregate these different effects is not currently specified except to say that the unit of analysis is the whole work item.
Future work should provide some examples of how this holistic analysis of multiple vulnerabilities remediated in one patch should be conducted.
-
## Further Decision Tree Testing
More testing with diverse analysts is necessary before the decision trees are reliable. In this context, **reliable** means that two analysts, given the same vulnerability description and decision process description, will reach the same decision. Such reliability is important if scores and priorities are going to be useful. If they are not reliable, they will vary widely over time and among analysts. Such variability makes it impossible to tell whether a difference in scores is really due to one vulnerability being higher priority than another.
diff --git a/docs/topics/index.md b/docs/topics/index.md
index eff8e9f3..d1e73a6d 100644
--- a/docs/topics/index.md
+++ b/docs/topics/index.md
@@ -11,7 +11,6 @@
[SSVC How-To](../howto/index.md) provides practical guidance for implementing SSVC in your organization.
For technical reference, see [Reference](../reference/index.md).
-
This documentation defines a testable Stakeholder-Specific Vulnerability Categorization (SSVC) for prioritizing actions during vulnerability management.
The stakeholders in vulnerability management are diverse.
This diversity must be accommodated in the main functionality, rather than squeezed into hard-to-use optional features.
@@ -23,7 +22,6 @@ As such, the modeling framework is important but difficult to pin down.
We approach this problem as a satisficing process.
We do not seek optimal formalisms, but an adequate formalism.
-
## Key Concepts in SSVC Decision Models
SSVC models individual vulnerability management decisions. It is built around the following concepts:
@@ -34,14 +32,13 @@ SSVC models individual vulnerability management decisions. It is built around th
are an ordered set of enumerated values. They are ordered because they are sortable in some dimension, usually
having to do with priority or urgency. They are enumerated because they are finite and discrete.
- **Outcomes** are the dependent variables that are relevant to the decision. Each outcome represents a different
- possible result of the decision.
-- **Outcome Values** are the possible values for an Outcome. Outcomes are similarly defined as an ordered set of
+ possible result of the decision.
+- **Outcome Values** are the possible values for an Outcome. Outcomes are similarly defined as an ordered set of
enumerated values, usually indicating a priority or urgency.
- A **Policy** is a mapping from each combination of decision point values to the set of outcome values.
- A **Decision Function** is a function that accepts a set of decision point values and returns an outcome value based
on a policy.
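The concepts above map naturally onto a small data structure: a policy is a table from decision point value combinations to outcome values, and the decision function is a lookup into that table. The sketch below is a toy illustration — the decision point names, values, and outcomes are invented for the example, and a real SSVC policy enumerates every combination of its decision point values:

```python
from typing import Mapping, Tuple

# A toy policy over two decision points (Exploitation, System Exposure).
# Keys are combinations of decision point values; values are outcomes.
Policy = Mapping[Tuple[str, str], str]

TOY_POLICY: Policy = {
    ("none",   "small"):      "defer",
    ("none",   "controlled"): "scheduled",
    ("active", "small"):      "scheduled",
    ("active", "controlled"): "immediate",
}


def decide(policy: Policy, exploitation: str, exposure: str) -> str:
    """A decision function: decision point values in, outcome value out."""
    return policy[(exploitation, exposure)]


print(decide(TOY_POLICY, "active", "controlled"))  # immediate
```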
-
```mermaid
---
title: Decision Points and Values
@@ -69,7 +66,6 @@ flowchart LR
Policy --> Outcomes
```
-
!!! question "Where do the trees come in?"
Our initial concept for SSVC's decision modeling was based on decision trees.
@@ -89,7 +85,7 @@ flowchart LR
convenient way to visualize the decision function, but they are not a requirement of the model.
## Topics Overview
-
+
The remainder of this section is organized as follows:
diff --git a/docs/topics/information_sources.md b/docs/topics/information_sources.md
index 2e080f6f..3ed11424 100644
--- a/docs/topics/information_sources.md
+++ b/docs/topics/information_sources.md
@@ -21,7 +21,6 @@ Although the lists are all different, we expect they are all valid information s
We are not aware of a comparative study of the different lists of active exploits; however, we expect they have similar properties to block lists of network touchpoints [@metcalf2015blocklist] and malware [@kuhrer2014paint].
Namely, each list has a different view and vantage on the problem, which makes them appear to be different, but each list accurately represents its particular vantage at a point in time.
-
## System Exposure
[*System Exposure*](../reference/decision_points/system_exposure.md) could be informed by the various scanning platforms such as Shodan and Shadowserver.
@@ -30,6 +29,7 @@ Such scans do not find all [*open*](../reference/decision_points/system_exposure
Scanning software, such as the open-source tool Nessus, could be used to scan for connectivity inside an organization to catalogue what devices should be scored [*controlled*](../reference/decision_points/system_exposure.md) if, say, the scan finds them on an internal network where devices regularly connect to the Internet.
---
+
## Adapting other Information Sources
Some information sources that were not designed with SSVC in mind can be adapted to work with it.
@@ -54,16 +54,16 @@ The interpretation is different for CVSS version 3 than version 4.
That is, if the vulnerability leads to a high impact on the confidentiality and integrity of the vulnerable system, then that is equivalent to total technical impact on the system.
-The following considerations are accounted for in this recommendation.
+The following considerations are accounted for in this recommendation.
1. A denial of service condition is modeled as a *partial* [*Technical Impact*](../reference/decision_points/technical_impact.md).
Therefore, a high availability impact to the vulnerable system should not be mapped to *total* [*Technical Impact*](../reference/decision_points/technical_impact.md) on its own.
-2. There may be situations in which a high confidentiality impact is sufficient for total technical impact; for example, disclosure of the root or administrative password for the system leads to total technical control of the system.
-So this suggested mapping is a useful heuristic, but there may be exceptions, depending on exactly what the CVSS v4 metric value assignment norms are and become for these situations.
+2. There may be situations in which a high confidentiality impact is sufficient for total technical impact; for example, disclosure of the root or administrative password for the system leads to total technical control of the system.
+So this suggested mapping is a useful heuristic, but there may be exceptions, depending on exactly what the CVSS v4 metric value assignment norms are and become for these situations.
3. While the Subsequent System impact metric group in CVSS v4 is useful, those concepts are not captured by [*Technical Impact*](../reference/decision_points/technical_impact.md).
-Subsequent System impacts are captured, albeit in different framings, by decision points such as [*Situated Safety Impact*](../reference/decision_points/safety_impact.md), [*Mission Impact*](../reference/decision_points/mission_impact.md), and [*Value Density*](../reference/decision_points/value_density.md).
-There is not a direct mapping between the subsequent system impact metric group and these decision points, except in the case of [*Public Safety Impact*](../reference/decision_points/public_safety_impact.md) and the CVSS v4 environmental metrics for Safety Impact in the subsequent system metric group.
-In that case, both definitions map back to the same safety impact standard for definitions (IEC 61508) and so are easily mapped to each other.
+Subsequent System impacts are captured, albeit in different framings, by decision points such as [*Situated Safety Impact*](../reference/decision_points/safety_impact.md), [*Mission Impact*](../reference/decision_points/mission_impact.md), and [*Value Density*](../reference/decision_points/value_density.md).
+There is not a direct mapping between the subsequent system impact metric group and these decision points, except in the case of [*Public Safety Impact*](../reference/decision_points/public_safety_impact.md) and the CVSS v4 environmental metrics for Safety Impact in the subsequent system metric group.
+In that case, both definitions map back to the same safety impact standard for definitions (IEC 61508) and so are easily mapped to each other.
#### CVSS v3 and Technical Impact
@@ -72,10 +72,10 @@ For CVSS v3, the impact metric group cannot be directly mapped to [*Technical Im
If the CVSS version 3 value of “Scope” is “Unchanged,” then the recommendation is the same as that for CVSS v4, above, as the impact metric group is information exclusively about the vulnerable system.
If the CVSS version 3 value of “Scope” is “Changed,” then the impact metrics may be about either the vulnerable system or the subsequent systems, based on whichever makes the final score higher.
Since [*Technical Impact*](../reference/decision_points/technical_impact.md) is based only on the vulnerable system impacts, if "Scope" is "Changed" then the ambiguity between vulnerable and subsequent system impacts is not documented in the vector string.
-This ambiguity makes it impossible to cleanly map the [*Technical Impact*](../reference/decision_points/technical_impact.md) value in this case.
+This ambiguity makes it impossible to cleanly map the [*Technical Impact*](../reference/decision_points/technical_impact.md) value in this case.
!!! tip "Mapping CVSS v3 to Technical Impact"
-
+
Summarizing the discussion above, the mapping between CVSS v3 and [*Technical Impact*](../reference/decision_points/technical_impact.md) is
| CVSS Scope | Confidentiality (C) | Integrity (I) | Availability (A) | [*Technical Impact*](../reference/decision_points/technical_impact.md) |
@@ -85,7 +85,6 @@ This ambiguity makes it impossible to cleanly map the [*Technical Impact*](../re
| Unchanged | Low (L) or None (N) | High (H) | *any* | Partial |
| Changed | *any* | *any* | *any* | (ambiguous) |
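The mapping in this tip can be sketched as a small function. This is an illustrative reading of the table and the discussion above, not part of the `ssvc` package; the function name and return strings are ours:

```python
def technical_impact_from_cvss_v3(scope: str, conf: str, integ: str) -> str:
    """Sketch of the CVSS v3 -> Technical Impact mapping discussed above.

    Metric values use CVSS abbreviations ("H", "L", "N").
    Availability is intentionally ignored: a DoS alone is "partial".
    """
    if scope == "Changed":
        # The vector cannot distinguish vulnerable-system impacts from
        # subsequent-system impacts, so no clean mapping exists.
        return "ambiguous"
    if conf == "H" and integ == "H":
        return "total"
    return "partial"


print(technical_impact_from_cvss_v3("Unchanged", "H", "H"))  # total
```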
-
### CWE and Exploitation
As mentioned in the discussion of [*Exploitation*](../reference/decision_points/exploitation.md), [CWE](https://cwe.mitre.org/) could be used to inform one of the conditions that satisfy [*proof of concept*](../reference/decision_points/exploitation.md).
diff --git a/docs/topics/items_with_same_priority.md b/docs/topics/items_with_same_priority.md
index 87641842..e1b3f661 100644
--- a/docs/topics/items_with_same_priority.md
+++ b/docs/topics/items_with_same_priority.md
@@ -31,5 +31,3 @@ The priority is equivalent.
fine-grained priorities within qualitative categories anyway.
With our system, organizations can be more deliberate about conveniently organizing work that is of equivalent priority.
-
-
diff --git a/docs/topics/limitations.md b/docs/topics/limitations.md
index c6b365fd..2b2d0bf5 100644
--- a/docs/topics/limitations.md
+++ b/docs/topics/limitations.md
@@ -25,16 +25,16 @@ This is not a calculation of any kind, just an assignment of a label which may m
Of course, these labels are dangerous, as they may be misused as numbers.
Therefore, we prefer the use of *defer*, *scheduled*, etc., as listed in
[Enumerating Vulnerability Management Actions](../howto/deployer_tree.md).
-
+
## Expanded Context
We incorporated a wider variety of inputs from contexts beyond the affected component.
Some organizations are not prepared or configured to reliably produce such data (e.g., around mission impact or safety impact). There is adequate guidance for how to elicit and curate this type of information from various risk management frameworks, including OCTAVE [@caralli2007octave]. Not every organization is going to have sufficiently mature risk management functions to apply SSVC.
-
+
This second limitation should be approached with two strategies:
1. Organizations should be encouraged and enabled to mature their risk management capabilities
-2. In the meantime, organizations such as NIST could consider developing default advice.
+2. In the meantime, organizations such as NIST could consider developing default advice.
The most practical framing of this approach might be for the NIST NVD to produce scores from the perspective of a
- new stakeholder—something like “national security” or “public well-being” that is explicitly a sort of default
+ new stakeholder—something like “national security” or “public well-being” that is explicitly a sort of default
advice for otherwise uninformed organizations that can then explicitly account for national priorities, such as critical infrastructure.
diff --git a/docs/topics/related_systems.md b/docs/topics/related_systems.md
index 4a4ed6ee..16c71596 100644
--- a/docs/topics/related_systems.md
+++ b/docs/topics/related_systems.md
@@ -107,7 +107,6 @@ CVSS is one-size-fits-all by design.
These customization efforts struggle with adapting CVSS because it was not designed to be adaptable to different stakeholder considerations.
The SSVC section [Tree Construction and Customization Guidance](../howto/tree_customization.md) explains how stakeholders or stakeholder communities can adapt SSVC in a reliable way that still promotes repeatability and communication.
-
## vPrioritizer
vPrioritizer is an open-source project that attempts to integrate asset management and vulnerability prioritization.
@@ -118,5 +117,3 @@ In that sense, it is compatible with any of methods mentioned above or SSVC.
However, SSVC would be better suited to address vPrioritizer's broad spectrum asset management data.
For example, vPrioritizer aims to collect data points on topics such as asset significance.
Asset significance could be expressed through the SSVC decision points of [*Mission Impact*](../reference/decision_points/mission_impact.md) and situated [*Well-being Impact*](../reference/decision_points/human_impact.md), but it does not have a ready expression in CVSS, EPSS, or VPR.
-
-
diff --git a/docs/topics/representing_information.md b/docs/topics/representing_information.md
index 0d09753f..d5c6f471 100644
--- a/docs/topics/representing_information.md
+++ b/docs/topics/representing_information.md
@@ -1,7 +1,7 @@
# Representing Information for Decisions About Vulnerabilities
We propose that focusing on decisions about vulnerabilities—rather than on their severity—is a more useful approach.
-Our design goals for the decision-making process are to
+Our design goals for the decision-making process are to
- clearly define whose decisions are involved
- properly use evidentiary categories
@@ -35,23 +35,22 @@ Therefore, under a Gaussian error distribution, 8.9 is really 60\% high and 40\%
SSVC decisions should be distinct and crisp, without such statistical overlaps.
We avoid numerical representations for either inputs or outputs of a vulnerability management decision process.
-Quantified metrics are more useful when
+Quantified metrics are more useful when
-1. data for decision making is available, and
+1. data for decision making is available, and
2. the stakeholders agree on how to measure.
Vulnerability management does not yet meet either criterion.
Furthermore, it is not clear to what extent measurements about a vulnerability can be informative about other vulnerabilities.
Each vulnerability has a potentially unique relationship to the socio-technical system in which it exists, including the Internet.
-
## Be Based on Reliably Available Evidence
Vulnerability management decisions are often contextual: given what is known at the time, the decision is to do X.
But what is known can change over time, which can and should influence the decision.
The context of the vulnerability, and the systems it impacts, are inextricably linked to managing it.
Some information about the context will be relatively static over time, such as the contribution of a system to an organization's mission.
-Other information can change rapidly as events occur, such as the public release of an exploit or observation of attacks.
+Other information can change rapidly as events occur, such as the public release of an exploit or observation of attacks.
Temporal and environmental considerations should be primary, not optional as they are in CVSS.
We discuss the temporal aspects further in [Information Changes over Time](../howto/bootstrap/use.md).
@@ -65,9 +64,9 @@ Transparency should improve trust in the results.
Finally, any result of a decision-making process should be **explainable**.
Explainable is defined and used with its common meaning, not as it is used in the research area of explainable artificial intelligence.
An explanation should make the process intelligible to an interested, competent, non-expert person.
-There are at least two reasons common explainability is important:
+There are at least two reasons common explainability is important:
-1. for troubleshooting and error correction and
+1. for troubleshooting and error correction and
2. for justifying proposed decisions.
## Summary
@@ -75,17 +74,16 @@ There are at least two reasons common explainability is important:
To summarize, the following are our design goals for a vulnerability
management process:
- - Outputs are decisions.
+- Outputs are decisions.
- - Pluralistic recommendations are made among a manageable number of
+- Pluralistic recommendations are made among a manageable number of
stakeholder groups.
- - Inputs are qualitative.
+- Inputs are qualitative.
- - Outputs are qualitative, and there are no (unjustified) shifts to
+- Outputs are qualitative, and there are no (unjustified) shifts to
quantitative calculations.
- - Process justification is transparent.
-
- - Results are explainable.
+- Process justification is transparent.
+- Results are explainable.
diff --git a/docs/topics/risk_tolerance_and_priority.md b/docs/topics/risk_tolerance_and_priority.md
index 631be2dc..e738e411 100644
--- a/docs/topics/risk_tolerance_and_priority.md
+++ b/docs/topics/risk_tolerance_and_priority.md
@@ -21,11 +21,10 @@ A successful vulnerability management practice must balance at least two risks:
problems that could arise from making changes to production systems.
2. **Vulnerability risk**: the potential costs of incidents resulting from exploitation of vulnerable systems
-
In developing the decision trees in this document, we had in mind stakeholders with a moderate tolerance for risk. The resulting trees reflect that assumption. Organizations may of course be more or less conservative in their own vulnerability management practices, and we cannot presume to determine how an organization should balance their risk.
We therefore remind our readers that the labels on the trees (defer, immediate, etc.) can and should be customized to
-suit the needs of individual stakeholders wherever necessary and appropriate.
+suit the needs of individual stakeholders wherever necessary and appropriate.
---
@@ -37,5 +36,3 @@ suit the needs of individual stakeholders wherever necessary and appropriate.
the most urgent response.
- On the other hand, an organization with a high aversion to vulnerability risk could elevate the priority of many
branches to ensure fixes are deployed quickly.
-
-
diff --git a/docs/topics/state_of_practice.md b/docs/topics/state_of_practice.md
index e2009777..a9c0997e 100644
--- a/docs/topics/state_of_practice.md
+++ b/docs/topics/state_of_practice.md
@@ -1,5 +1,4 @@
-
# Current state of practice
**Vulnerability management** covers “the discovery, analysis, and handling of new or reported security vulnerabilities in information systems \[and\] the detection of and response to known vulnerabilities in order to prevent them from being exploited” [@csirtservices_v2].
diff --git a/docs/topics/vulnerability_management_decisions.md b/docs/topics/vulnerability_management_decisions.md
index 9744d8fd..6edc2258 100644
--- a/docs/topics/vulnerability_management_decisions.md
+++ b/docs/topics/vulnerability_management_decisions.md
@@ -8,4 +8,3 @@ The “what” is about the scope, both in how the affected system is defined an
While we strive to make our examples realistic, we invite the community to engage and conduct empirical assessments to test them.
The following construction should be treated as an informed hypothesis rather than a conclusion.
-
diff --git a/docs/topics/worked_example.md b/docs/topics/worked_example.md
index 21542a34..ba00739b 100644
--- a/docs/topics/worked_example.md
+++ b/docs/topics/worked_example.md
@@ -43,7 +43,7 @@ use its installation to remotely identify targets.
However, since most of the hospital’s clients have not installed the app and, in nearly all cases, physical proximity
to the device is necessary, we select [*small*](../reference/decision_points/system_exposure.md) and move on to ask about mission impact.
-According to the fictional pilot scenario,
+According to the fictional pilot scenario,
> Our mission dictates that the first and foremost priority is to contribute to human welfare and to uphold the Hippocratic oath (do no harm).
diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index ba3c6f09..802ff565 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -44,9 +44,6 @@ SSVC can be used in conjunction with other tools and methodologies to help prior
This information can be used to inform the [Exploitation](../reference/decision_points/exploitation.md) decision point in the
[Supplier](../howto/supplier_tree.md), [Deployer](../howto/deployer_tree.md), and [Coordinator Publication](../howto/publication_decision.md) decision models.
-
-
-
## Videos
Provided below are videos that provide an overview of SSVC and the implementation of decision models.
@@ -71,6 +68,6 @@ We've collected a list of articles and blog posts that provide additional inform
| SEI | [Prioritizing Vulnerability Response with a Stakeholder-Specific Vulnerability Categorization](https://insights.sei.cmu.edu/blog/prioritizing-vulnerability-response-with-a-stakeholder-specific-vulnerability-categorization/) |
| CISA | [Stakeholder-Specific Vulnerability Categorization (SSVC)](https://www.cisa.gov/stakeholder-specific-vulnerability-categorization-ssvc) |
| Qualys | [Effective Vulnerability Management with Stakeholder Specific Vulnerability Categorization (SSVC) and Qualys TruRisk](https://blog.qualys.com/product-tech/2022/11/30/effective-vulnerability-management-with-ssvc-and-qualys-trurisk) |
-| Vulcan Cyber | [The SSVC risk prioritization method: what it is, when to use it, and alternatives](https://vulcan.io/blog/the-ssvc-risk-prioritization-method-what-it-is-when-to-use-it-and-alternatives/) |
+| Vulcan Cyber | [The SSVC risk prioritization method: what it is, when to use it, and alternatives](https://vulcan.io/blog/the-ssvc-risk-prioritization-method-what-it-is-when-to-use-it-and-alternatives/) |
-Have a link to something we missed? Let us know in an [issue](https://github.com/CERTCC/SSVC/issues/new).
\ No newline at end of file
+Have a link to something we missed? Let us know in an [issue](https://github.com/CERTCC/SSVC/issues/new).
diff --git a/src/README.md b/src/README.md
index 84b8f226..e90e39d1 100644
--- a/src/README.md
+++ b/src/README.md
@@ -7,13 +7,14 @@ This directory holds helper scripts that can make managing or using SSVC easier.
This Python script takes a CSV of the format in the `../data` directory and gets you (most of the way) to a pretty decision tree visualization. It creates a LaTeX file from which you can produce a PDF (and from there, a PNG or whatever you want).
`python SSVC_csv-to-latex.py --help` works and should explain all your options.
-When the script finishes, it will also print a message with instructions for creating the PDF or PNG from the tex. A potential future improvement is to call `latexmk` directly from the python script.
+When the script finishes, it will also print a message with instructions for creating the PDF or PNG from the tex. A potential future improvement is to call `latexmk` directly from the python script.
Example usage:
+
```sh
python SSVC_csv-to-latex.py --input=../data/ssvc_2_deployer_simplified.csv --output=tmp.tex --delim="," --columns="0,2,1" --label="3" --header-row --priorities="defer, scheduled, out-of-cycle, immediate"
```
Dependencies: LaTeX.
-To install latex, see https://www.latex-project.org/get/
-`latexmk` is a helper script that is not included in all distributions by default; if you need it, see https://ctan.org/pkg/latexmk/?lang=en
+To install LaTeX, see <https://www.latex-project.org/get/>.
+`latexmk` is a helper script that is not included in all distributions by default; if you need it, see <https://ctan.org/pkg/latexmk/?lang=en>.