Commit cc88b06 (1 parent: 9b7ccb0)

Remove workaround for llama 3 because it is now supported by llama.cpp

1 file changed, +1 −1 lines

quantize_weights_for_llama.cpp.ps1

@@ -65,7 +65,7 @@ ForEach ($repositoryName in $repositoryDirectories) {
     $convertParameters = "--outfile `"${unquantizedModelPath}`" `"${sourceDirectoryPath}`""

     # Some models have a Byte Pair Encoding (BPE) vocabulary type.
-    if (@("Smaug-72B-v0.1", "Meta-Llama-3-8B-Instruct", "Meta-Llama-3-70B-Instruct").Contains($repositoryName)) {
+    if (@("Smaug-72B-v0.1").Contains($repositoryName)) {
         $convertParameters = "--vocab-type `"bpe`" --pad-vocab $convertParameters"
     }
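The change above can be sketched in isolation: only repositories listed in the BPE array get the `--vocab-type "bpe" --pad-vocab` flags prepended to the convert parameters; after this commit, the Meta-Llama-3 models fall through to the default path because llama.cpp now handles their vocabulary itself. The function name `Get-ConvertParameters` and the placeholder paths here are hypothetical, for illustration only.

```powershell
# Hypothetical sketch of the selection logic, assuming the post-commit model list.
$bpeModels = @("Smaug-72B-v0.1")

function Get-ConvertParameters([string]$repositoryName) {
    # Placeholder output/source paths stand in for the script's real variables.
    $convertParameters = "--outfile `"model.gguf`" `"source`""
    if ($bpeModels.Contains($repositoryName)) {
        # Prepend the BPE flags only for models that still need the workaround.
        $convertParameters = "--vocab-type `"bpe`" --pad-vocab $convertParameters"
    }
    return $convertParameters
}

Get-ConvertParameters "Smaug-72B-v0.1"           # gets the BPE flags
Get-ConvertParameters "Meta-Llama-3-8B-Instruct" # plain parameters after this commit
```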