2 files changed (+5 −4 lines) under `mistral/mistral-7b-instruct`.
The first file (the Truss config) bumps the cached model to v0.2 and updates the pinned `transformers` version:

```diff
@@ -6,7 +6,7 @@ model_cache:
       - '*.json'
       - '*.safetensors'
       - '*.model'
-    repo_id: mistralai/Mistral-7B-Instruct-v0.1
+    repo_id: mistralai/Mistral-7B-Instruct-v0.2
 model_metadata:
   avatar_url: https://cdn.baseten.co/production/static/explore/mistral_logo.png
   cover_image_url: https://cdn.baseten.co/production/static/explore/mistral.png
@@ -20,7 +20,7 @@ python_version: py311
 requirements:
   - sentencepiece
   - accelerate
-  - transformers==4.34.0
+  - transformers==4.38.1
   - torch==2.0.1
 resources:
   accelerator: A10G
```
The second file (the model implementation) stores the parsed config and reads the repo ID from it, rather than hardcoding the Hugging Face repo in two places:

```diff
@@ -13,16 +13,17 @@ class Model:
     def __init__(self, **kwargs):
         self.tokenizer = None
         self.model = None
+        self._config = kwargs["config"]
 
     def load(self):
         self.model = AutoModelForCausalLM.from_pretrained(
-            "mistralai/Mistral-7B-Instruct-v0.1",
+            self._config["model_cache"][0]["repo_id"],
             torch_dtype=torch.float16,
             device_map="auto",
         )
 
         self.tokenizer = AutoTokenizer.from_pretrained(
-            "mistralai/Mistral-7B-Instruct-v0.1",
+            self._config["model_cache"][0]["repo_id"],
             device_map="auto",
             torch_dtype=torch.float16,
         )
```
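To illustrate the pattern the Python change relies on, here is a minimal sketch (no Truss or model weights required) of a `Model` that receives the parsed config as a keyword argument and resolves the repo ID from the first `model_cache` entry. The `config` dict below is an assumed shape mirroring the YAML above; in a real deployment the serving framework constructs and passes it.

```python
# Assumed config shape, mirroring the config.yaml diff above.
config = {
    "model_cache": [
        {
            "repo_id": "mistralai/Mistral-7B-Instruct-v0.2",
            "allow_patterns": ["*.json", "*.safetensors", "*.model"],
        }
    ]
}

class Model:
    def __init__(self, **kwargs):
        self.tokenizer = None
        self.model = None
        # Store the parsed config so load() can look up values from it.
        self._config = kwargs["config"]

    def repo_id(self):
        # Single source of truth: the first model_cache entry in config.yaml.
        # Bumping the version in YAML now updates the code path too.
        return self._config["model_cache"][0]["repo_id"]

model = Model(config=config)
print(model.repo_id())  # mistralai/Mistral-7B-Instruct-v0.2
```

The design benefit is that the model version lives in exactly one place: changing `repo_id` in the YAML automatically changes what `from_pretrained` loads, which is why the diff no longer needs to edit the Python file for future version bumps.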