The selling point of this paper is extremely low extra parameters per added concept.
It seems they successfully applied the Rank-1 editing technique from a <a href="https://arxiv.org/abs/2202.05262">memory editing paper for LLMs</a>, with a few improvements. They also identified that the keys determine the "where" of the new concept, while the values determine the "what", and propose local / global key locking to a superclass concept (while learning the values).
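To make the idea concrete, here is a minimal sketch of a ROME-style rank-1 edit applied to a cross attention key or value projection. The function name `edit_rank_one` and the covariance argument `C` are illustrative assumptions, not the API of this repository.

```python
import torch

def edit_rank_one(
    W: torch.Tensor,       # (dim_out, dim_in) projection weight, e.g. a cross attention to_k / to_v
    k_star: torch.Tensor,  # (dim_in,) text encoding of the new concept token
    v_star: torch.Tensor,  # (dim_out,) desired output, e.g. the superclass key when "key locking"
    C: torch.Tensor,       # (dim_in, dim_in) uncentered covariance of inputs seen by W (the prior)
) -> torch.Tensor:
    """Rank-1 update so that (W + update) @ k_star == v_star, while perturbing
    outputs for inputs drawn from the prior (captured by C) as little as possible."""
    c_inv_k = torch.linalg.solve(C, k_star)     # C^{-1} k*
    residual = v_star - W @ k_star              # what the current weight gets wrong at k*
    update = torch.outer(residual, c_inv_k) / (k_star @ c_inv_k)
    return W + update
```

Under this reading, key locking fixes `v_star` on the key side to whatever the superclass word already produces (the "where"), while the value-side target is learned for the new concept (the "what").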
For researchers out there, if this paper checks out, the tools in this repository should work for any other text-to-`<insert modality>` network using cross attention conditioning. Just a thought.
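As a rough illustration of what "cross attention conditioning" means here (a generic sketch, not code from this repository): the only structural requirement is that text encodings enter the network through key / value projections, regardless of what modality the queries come from.

```python
import torch
from torch import nn

class CrossAttentionConditioning(nn.Module):
    """Generic text conditioning block: queries come from the generated modality,
    keys / values come from the text encoder - the projections a rank-1 edit would target."""
    def __init__(self, dim, text_dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)       # from image / audio / video features
        self.to_k = nn.Linear(text_dim, dim, bias=False)  # "where" - candidate for key locking
        self.to_v = nn.Linear(text_dim, dim, bias=False)  # "what" - learned for the new concept
        self.scale = dim ** -0.5

    def forward(self, x, text_enc):
        q, k, v = self.to_q(x), self.to_k(text_enc), self.to_v(text_enc)
        attn = (q @ k.transpose(-1, -2) * self.scale).softmax(dim=-1)
        return attn @ v
```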
## Appreciation
- <a href="https://stability.ai/">StabilityAI</a> for the generous sponsorship, as well as my other sponsors out there