
Commit c25b832

nakanoh and nakano-h authored
fix: typo in SECURITY.md (practicies -> practices) (#31509)
**Description:** Fixes a typo in SECURITY.md ("practicies" → "practices"). Note: This PR also unifies apostrophe usage (’ → ').

**Issue:** N/A

**Dependencies:** None

**Twitter handle:** N/A

Co-authored-by: 中野 博文 <[email protected]>
1 parent 35ae5ea commit c25b832

File tree

1 file changed (+3, -3 lines changed)


SECURITY.md

Lines changed: 3 additions & 3 deletions
@@ -7,8 +7,8 @@ LangChain has a large ecosystem of integrations with various external resources
 When building such applications developers should remember to follow good security practices:
 
 * [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc. as appropriate for your application.
-* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it’s safest to assume that any LLM able to use those credentials may in fact delete data.
-* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It’s best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
+* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
+* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
 
 Risks of not doing so include, but are not limited to:
 * Data corruption or loss.
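Illustrative only and not part of the diff above: the "Limit Permissions" bullet in this hunk recommends read-only credentials and sandboxing. A minimal Python sketch of the read-only idea, using SQLite's read-only URI mode; the helper `run_readonly_query`, the database file, and the query are hypothetical.

```python
import sqlite3


def run_readonly_query(db_path: str, query: str) -> list[tuple]:
    """Run a query against a SQLite file opened in read-only mode."""
    # mode=ro makes SQLite reject any statement that would modify the file,
    # so an LLM tool wired to this helper can read data but never alter it.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(query).fetchall()
    finally:
        conn.close()


# Example (hypothetical database and table): a SELECT succeeds, while a
# DELETE raises sqlite3.OperationalError because the connection is read-only.
# run_readonly_query("app.db", "SELECT name FROM users LIMIT 5")
```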
@@ -39,7 +39,7 @@ Before reporting a vulnerability, please review:
 
 1) In-Scope Targets and Out-of-Scope Targets below.
 2) The [langchain-ai/langchain](https://python.langchain.com/docs/contributing/repo_structure) monorepo structure.
-3) The [Best practicies](#best-practices) above to
+3) The [Best practices](#best-practices) above to
 understand what we consider to be a security vulnerability vs. developer
 responsibility.
 
