
Commit 99e8496

Author: Sanggyu Lee

Rename
- forward_old → forward_org
- output filename: llama → tinyllama
- LlamaDecoderLayerWithCache → LlamaDecoderLayerWithKVCache

1 parent 5bc2ce6 commit 99e8496

File tree

2 files changed: +4 −4 lines changed

test/modules/model/LlamaDecoderLayerWithCache/model.py renamed to test/modules/model/LlamaDecoderLayerWithKVCache/model.py

Lines changed: 4 additions & 4 deletions
@@ -8,15 +8,15 @@
 
 from transformers.models.llama.modeling_llama import LlamaDecoderLayer
 
-forward_old = LlamaDecoderLayer.forward
+forward_org = LlamaDecoderLayer.forward
 
 
 def capture_and_forward(self, *args, **kwargs):
     global captured_input
 
     # Prepare args tuple for TICO.convert()
     # Get arg_names in positional args order using inspect
-    sig = inspect.signature(forward_old)
+    sig = inspect.signature(forward_org)
     args_names = [
         # signature includes `self`` and `kwargs``.
         # Just retrieve the ordinary positional inputs only
@@ -38,7 +38,7 @@ def populate_args(args_dict, filter):
     input_to_remove = ["use_cache"]
     captured_input = populate_args(args_dict, input_to_remove)
 
-    return forward_old(self, *args, **kwargs)
+    return forward_org(self, *args, **kwargs)
 
 
 # Tokenizer
@@ -82,4 +82,4 @@ def populate_args(args_dict, filter):
 model = AutoModelForCausalLM.from_pretrained(model_name)
 model.eval()
 circle_model = tico.convert(model.model.layers[0], captured_input)
-circle_model.save(f"llama.decoderlayer.circle")
+circle_model.save(f"tinyllama.decoderlayer.circle")

test/modules/model/LlamaDecoderLayerWithCache/requirements.txt renamed to test/modules/model/LlamaDecoderLayerWithKVCache/requirements.txt

File renamed without changes.
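The patched `capture_and_forward` in the diff saves the original `forward` (the renamed `forward_org`), records the call's positional arguments via `inspect.signature`, and then delegates to the original. A minimal, self-contained sketch of the same capture pattern, with a hypothetical toy `Layer` class standing in for `LlamaDecoderLayer` (no transformers or tico dependency):

```python
import inspect

class Layer:
    """Stand-in for LlamaDecoderLayer (hypothetical, for illustration only)."""
    def forward(self, x, scale=1.0):
        return x * scale

# Save the original method before patching (the "forward_org" pattern).
forward_org = Layer.forward
captured_input = None

def capture_and_forward(self, *args, **kwargs):
    """Record positional arguments by name, then delegate to the original forward."""
    global captured_input
    sig = inspect.signature(forward_org)
    # Skip `self`; keep only the ordinary positional parameters.
    arg_names = [
        name
        for name, p in sig.parameters.items()
        if name != "self" and p.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD
    ]
    captured_input = dict(zip(arg_names, args))
    return forward_org(self, *args, **kwargs)

# Install the patch, call the model once, and the inputs are captured.
Layer.forward = capture_and_forward

out = Layer().forward(3, 2.0)
print(out)             # 6.0
print(captured_input)  # {'x': 3, 'scale': 2.0}
```

After one real call the captured dictionary can be handed to a converter (as `tico.convert(layer, captured_input)` does in the diff), which is why the patched function still returns the original result: the model run proceeds unchanged.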
