
Commit 799a1a3

Merge pull request #1456 from PAIR-code/dev
Merge dev into main
2 parents 72e8479 + e1f5756 commit 799a1a3

10 files changed: +75 additions, -31 deletions


.github/workflows/ci.yml

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ jobs:
     strategy:
       matrix:
         node-version: [18]
-        python-version: ["3.10"]
+        python-version: ["3.10", "3.11"]
     defaults:
       run:
         shell: bash -l {0}
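The one-line matrix change above doubles the CI coverage: GitHub Actions runs one job per combination of matrix axes. As a rough sketch of that cross-product expansion (plain Python standing in for the Actions runtime, which this is not):

```python
from itertools import product

# Matrix axes as declared in ci.yml after this commit.
matrix = {
    "node-version": [18],
    "python-version": ["3.10", "3.11"],
}

# Each combination of axis values becomes one CI job.
jobs = [dict(zip(matrix, combo)) for combo in product(*matrix.values())]
for job in jobs:
    print(job)
```

Two jobs result: both on Node 18, one per Python version.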

docs/documentation/_sources/components.md.txt

Lines changed: 18 additions & 4 deletions
@@ -460,20 +460,34 @@ The UI supports multiple options for analysis, including:
 
 For a walkthrough of how to use sequence salience to debug LLMs, check out the
 Responsible Generative AI Toolkit at
-https://ai.google.dev/responsible/model_behavior.
+https://ai.google.dev/responsible/model_behavior and for more on design of the
+system see our paper at https://arxiv.org/abs/2404.07498.
+
+If you find this useful in your work, please cite Sequence Salience as:
+
+```
+@article{tenney2024interactive,
+  title={Interactive Prompt Debugging with Sequence Salience},
+  author={Tenney, Ian and Mullins, Ryan and Du, Bin and Pandya, Shree and Kahng, Minsuk and Dixon, Lucas},
+  journal={arXiv preprint arXiv:2404.07498},
+  year={2024}
+}
+```
 
 **Code:**
 
-*   LIT-for-Gemma Colab: [`lit_gemma.ipynb`](https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/gemma/docs/lit_gemma.ipynb)
+Currently, this works out-of-the-box with Gemma, Llama 2, Mistral, and GPT-2,
+using either KerasNLP or Transformers.
+
+*   LIT-for-Gemma Colab:
+    [`lit_gemma.ipynb`](https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/gemma/docs/lit_gemma.ipynb)
 *   Demo binary:
     [`lm_salience_demo.py`](https://github.com/PAIR-code/lit/blob/main/lit_nlp/examples/lm_salience_demo.py)
 *   KerasNLP model wrappers:
     [`instrumented_keras_lms.py`](https://github.com/PAIR-code/lit/blob/main/lit_nlp/examples/models/instrumented_keras_lms.py)
 *   Transformers model wrappers:
     [`pretrained_lms.py`](https://github.com/PAIR-code/lit/blob/main/lit_nlp/examples/models/pretrained_lms.py)
 
-Currently, this works out-of-the-box
-with Gemma models (using Keras) as well as with GPT-2.
 
 
 ## Salience Clustering

docs/documentation/components.html

Lines changed: 15 additions & 4 deletions
@@ -637,19 +637,30 @@ <h2>Sequence Salience<a class="headerlink" href="#sequence-salience" title="Link
 </ul>
 <p>For a walkthrough of how to use sequence salience to debug LLMs, check out the
 Responsible Generative AI Toolkit at
-<a class="reference external" href="https://ai.google.dev/responsible/model_behavior">https://ai.google.dev/responsible/model_behavior</a>.</p>
+<a class="reference external" href="https://ai.google.dev/responsible/model_behavior">https://ai.google.dev/responsible/model_behavior</a> and for more on design of the
+system see our paper at <a class="reference external" href="https://arxiv.org/abs/2404.07498">https://arxiv.org/abs/2404.07498</a>.</p>
+<p>If you find this useful in your work, please cite Sequence Salience as:</p>
+<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="nd">@article</span><span class="p">{</span><span class="n">tenney2024interactive</span><span class="p">,</span>
+<span class="n">title</span><span class="o">=</span><span class="p">{</span><span class="n">Interactive</span> <span class="n">Prompt</span> <span class="n">Debugging</span> <span class="k">with</span> <span class="n">Sequence</span> <span class="n">Salience</span><span class="p">},</span>
+<span class="n">author</span><span class="o">=</span><span class="p">{</span><span class="n">Tenney</span><span class="p">,</span> <span class="n">Ian</span> <span class="ow">and</span> <span class="n">Mullins</span><span class="p">,</span> <span class="n">Ryan</span> <span class="ow">and</span> <span class="n">Du</span><span class="p">,</span> <span class="n">Bin</span> <span class="ow">and</span> <span class="n">Pandya</span><span class="p">,</span> <span class="n">Shree</span> <span class="ow">and</span> <span class="n">Kahng</span><span class="p">,</span> <span class="n">Minsuk</span> <span class="ow">and</span> <span class="n">Dixon</span><span class="p">,</span> <span class="n">Lucas</span><span class="p">},</span>
+<span class="n">journal</span><span class="o">=</span><span class="p">{</span><span class="n">arXiv</span> <span class="n">preprint</span> <span class="n">arXiv</span><span class="p">:</span><span class="mf">2404.07498</span><span class="p">},</span>
+<span class="n">year</span><span class="o">=</span><span class="p">{</span><span class="mi">2024</span><span class="p">}</span>
+<span class="p">}</span>
+</pre></div>
+</div>
 <p><strong>Code:</strong></p>
+<p>Currently, this works out-of-the-box with Gemma, Llama 2, Mistral, and GPT-2,
+using either KerasNLP or Transformers.</p>
 <ul class="simple">
-<li><p>LIT-for-Gemma Colab: <a class="reference external" href="https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/gemma/docs/lit_gemma.ipynb"><code class="docutils literal notranslate"><span class="pre">lit_gemma.ipynb</span></code></a></p></li>
+<li><p>LIT-for-Gemma Colab:
+<a class="reference external" href="https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/gemma/docs/lit_gemma.ipynb"><code class="docutils literal notranslate"><span class="pre">lit_gemma.ipynb</span></code></a></p></li>
 <li><p>Demo binary:
 <a class="reference external" href="https://github.com/PAIR-code/lit/blob/main/lit_nlp/examples/lm_salience_demo.py"><code class="docutils literal notranslate"><span class="pre">lm_salience_demo.py</span></code></a></p></li>
 <li><p>KerasNLP model wrappers:
 <a class="reference external" href="https://github.com/PAIR-code/lit/blob/main/lit_nlp/examples/models/instrumented_keras_lms.py"><code class="docutils literal notranslate"><span class="pre">instrumented_keras_lms.py</span></code></a></p></li>
 <li><p>Transformers model wrappers:
 <a class="reference external" href="https://github.com/PAIR-code/lit/blob/main/lit_nlp/examples/models/pretrained_lms.py"><code class="docutils literal notranslate"><span class="pre">pretrained_lms.py</span></code></a></p></li>
 </ul>
-<p>Currently, this works out-of-the-box
-with Gemma models (using Keras) as well as with GPT-2.</p>
 </section>
 <section id="salience-clustering">
 <h2>Salience Clustering<a class="headerlink" href="#salience-clustering" title="Link to this heading">#</a></h2>

docs/documentation/searchindex.js

Lines changed: 1 addition & 1 deletion
Some generated files are not rendered by default.

docs/tutorials/sequence-salience/index.html

Lines changed: 5 additions & 4 deletions
@@ -112,10 +112,11 @@ <h2>Prompt Engineering with Sequence Salience</h2>
 LIT supports additional LLMs, including <a href="https://llama.meta.com/">Llama 2</a> and <a href="https://mistral.ai/news/announcing-mistral-7b/">Mistral</a>,
 via the HuggingFace Transformers and KerasNLP libraries.</p>
 <p>This tutorial was adapted from and expands upon LIT's contributions to the
-<a href="https://ai.google.dev/responsible">Responsible Generative AI Tookit</a> and the related paper and
-<a href="https://youtu.be/EZgUlnWdh0w">video</a> submitted to the ACL 2024 Systems Demonstration track.
-This is an active and ongoing research area for the LIT team, so expect changes
-and further expansions to this tutorial over time.</p>
+<a href="https://ai.google.dev/responsible">Responsible Generative AI Tookit</a> and the related
+<a href="https://arxiv.org/abs/2404.07498">paper</a> and <a href="https://youtu.be/EZgUlnWdh0w">video</a> submitted to the ACL 2024
+System Demonstrations track. This is an active and ongoing research area for
+the LIT team, so expect changes and further expansions to this tutorial over
+time.</p>
 <h2>Case Study 1: Debugging Few-Shot Prompts</h2>
 <p>Few-shot prompting was introduced with <a href="https://cdn.openai.com/better-language-models/language-models.pdf">GPT-2</a>: an ML developer provides
 examples of how to perform a task in a prompt, affixes user-provided content at

lit_nlp/examples/models/tfx_model_test.py

Lines changed: 2 additions & 1 deletion
@@ -12,7 +12,8 @@ def setUp(self):
     super(TfxModelTest, self).setUp()
     self._path = tempfile.mkdtemp()
     input_layer = tf.keras.layers.Input(
-        shape=(1), dtype=tf.string, name='example')
+        shape=(1,), dtype=tf.string, name='example'
+    )
     parsed_input = tf.io.parse_example(
         tf.reshape(input_layer, [-1]),
         {'input_0': tf.io.FixedLenFeature([1], dtype=tf.float32)})

pyproject.toml

Lines changed: 5 additions & 4 deletions
@@ -79,10 +79,11 @@ keywords = [
 [project.optional-dependencies]
 # LINT.IfChange
 examples = [
-  "gunicorn==20.1.0",
-  "tensorflow==2.10.0",
-  "tensorflow-datasets==4.8.0",
-  "tensorflow-text==2.10.0",
+  "gunicorn>=20.1.0",
+  "sentencepiece==0.1.99",
+  "tensorflow>=2.10.0,<2.16.0",
+  "tensorflow-datasets>=4.9.0",
+  "tensorflow-text>=2.10.0,<2.16.0",
   "torch>=2.0.0",
   "transformers>=4.27.1",
 ]

requirements_examples.txt

Lines changed: 4 additions & 4 deletions
@@ -13,11 +13,11 @@
 # limitations under the License.
 # ==============================================================================
 # LINT.IfChange
-gunicorn==20.1.0
+gunicorn>=20.1.0
 sentencepiece==0.1.99
-tensorflow==2.10.0
-tensorflow-datasets==4.8.0
-tensorflow-text==2.10.0
+tensorflow>=2.10.0,<2.16.0
+tensorflow-datasets>=4.9.0
+tensorflow-text>=2.10.0,<2.16.0
 torch>=2.0.0
 transformers>=4.27.1
 # LINT.ThenChange(./pyproject.toml)
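The dependency changes above swap exact pins (`==`) for bounded ranges (`>=lower,<upper`), letting pip pick up compatible patch and minor releases while still capping below 2.16. As a simplified sketch of how such a range constrains versions (real resolvers follow PEP 440; this toy parser assumes plain `X.Y.Z` version strings):

```python
def parse(version: str) -> tuple:
    """Toy parser: split an X.Y.Z version string into a comparable int tuple."""
    return tuple(int(part) for part in version.split("."))

def satisfies(version: str, lower: str, upper: str) -> bool:
    """True if lower <= version < upper, mimicking a '>=lower,<upper' specifier."""
    return parse(lower) <= parse(version) < parse(upper)

# The tensorflow constraint ">=2.10.0,<2.16.0" accepts 2.15.x but rejects 2.16.0:
print(satisfies("2.15.1", "2.10.0", "2.16.0"))  # True
print(satisfies("2.16.0", "2.10.0", "2.16.0"))  # False
```

Comparing int tuples rather than raw strings avoids the classic lexicographic trap where "2.9.0" sorts after "2.10.0".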

website/sphinx_src/components.md

Lines changed: 18 additions & 4 deletions
@@ -460,20 +460,34 @@ The UI supports multiple options for analysis, including:
 
 For a walkthrough of how to use sequence salience to debug LLMs, check out the
 Responsible Generative AI Toolkit at
-https://ai.google.dev/responsible/model_behavior.
+https://ai.google.dev/responsible/model_behavior and for more on design of the
+system see our paper at https://arxiv.org/abs/2404.07498.
+
+If you find this useful in your work, please cite Sequence Salience as:
+
+```
+@article{tenney2024interactive,
+  title={Interactive Prompt Debugging with Sequence Salience},
+  author={Tenney, Ian and Mullins, Ryan and Du, Bin and Pandya, Shree and Kahng, Minsuk and Dixon, Lucas},
+  journal={arXiv preprint arXiv:2404.07498},
+  year={2024}
+}
+```
 
 **Code:**
 
-*   LIT-for-Gemma Colab: [`lit_gemma.ipynb`](https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/gemma/docs/lit_gemma.ipynb)
+Currently, this works out-of-the-box with Gemma, Llama 2, Mistral, and GPT-2,
+using either KerasNLP or Transformers.
+
+*   LIT-for-Gemma Colab:
+    [`lit_gemma.ipynb`](https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/gemma/docs/lit_gemma.ipynb)
 *   Demo binary:
     [`lm_salience_demo.py`](https://github.com/PAIR-code/lit/blob/main/lit_nlp/examples/lm_salience_demo.py)
 *   KerasNLP model wrappers:
     [`instrumented_keras_lms.py`](https://github.com/PAIR-code/lit/blob/main/lit_nlp/examples/models/instrumented_keras_lms.py)
 *   Transformers model wrappers:
     [`pretrained_lms.py`](https://github.com/PAIR-code/lit/blob/main/lit_nlp/examples/models/pretrained_lms.py)
 
-Currently, this works out-of-the-box
-with Gemma models (using Keras) as well as with GPT-2.
 
 
 ## Salience Clustering

website/src/tutorials/sequence-salience.md

Lines changed: 6 additions & 4 deletions
@@ -55,10 +55,11 @@ LIT supports additional LLMs, including [Llama 2][llama] and [Mistral][mistral],
 via the HuggingFace Transformers and KerasNLP libraries.
 
 This tutorial was adapted from and expands upon LIT's contributions to the
-[Responsible Generative AI Tookit][rai_toolkit] and the related paper and
-[video][seqsal_video] submitted to the ACL 2024 Systems Demonstration track.
-This is an active and ongoing research area for the LIT team, so expect changes
-and further expansions to this tutorial over time.
+[Responsible Generative AI Tookit][rai_toolkit] and the related
+[paper][seqsal_paper] and [video][seqsal_video] submitted to the ACL 2024
+System Demonstrations track. This is an active and ongoing research area for
+the LIT team, so expect changes and further expansions to this tutorial over
+time.
 
 ## Case Study 1: Debugging Few-Shot Prompts
 
@@ -486,6 +487,7 @@ helpful guides that can help you develop better prompts, including:
 [salience_research_1]: https://dl.acm.org/doi/full/10.1145/3639372
 [salience_research_2]: https://arxiv.org/abs/2402.01761
 [seqsal_docs]: ../../documentation/components.html#sequence-salience
+[seqsal_paper]: https://arxiv.org/abs/2404.07498
 [seqsal_video]: https://youtu.be/EZgUlnWdh0w
 [synapis]: https://scholarspace.manoa.hawaii.edu/items/65312e48-5954-4a5f-a1e8-e5119e6abc0a
 [toolformer]: https://arxiv.org/abs/2302.04761
