Commit c850083

Add support for OLMo2 (#1076)

1 parent 6f27a10

5 files changed: +66 additions, −2 deletions
README.md

Lines changed: 2 additions & 1 deletion

```diff
@@ -371,7 +371,8 @@ You can refine your search by selecting the task you're interested in (e.g., [te
 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
 1. **[Nougat](https://huggingface.co/docs/transformers/model_doc/nougat)** (from Meta AI) released with the paper [Nougat: Neural Optical Understanding for Academic Documents](https://arxiv.org/abs/2308.13418) by Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic.
-1. **[OLMo](https://huggingface.co/docs/transformers/master/model_doc/olmo)** (from AI2) released with the paper [OLMo: Accelerating the Science of Language Models](https://arxiv.org/abs/2402.00838) by Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, Hannaneh Hajishirzi.
+1. **[OLMo](https://huggingface.co/docs/transformers/master/model_doc/olmo)** (from Ai2) released with the paper [OLMo: Accelerating the Science of Language Models](https://arxiv.org/abs/2402.00838) by Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, Hannaneh Hajishirzi.
+1. **[OLMo2](https://huggingface.co/docs/transformers/master/model_doc/olmo2)** (from Ai2) released with the blog [OLMo 2: The best fully open language model to date](https://allenai.org/blog/olmo2) by the Ai2 OLMo team.
 1. **OpenELM** (from Apple) released with the paper [OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework](https://arxiv.org/abs/2404.14619) by Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari.
 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
```

docs/snippets/6_supported-models.snippet

Lines changed: 2 additions & 1 deletion

```diff
@@ -86,7 +86,8 @@
 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
 1. **[Nougat](https://huggingface.co/docs/transformers/model_doc/nougat)** (from Meta AI) released with the paper [Nougat: Neural Optical Understanding for Academic Documents](https://arxiv.org/abs/2308.13418) by Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic.
-1. **[OLMo](https://huggingface.co/docs/transformers/master/model_doc/olmo)** (from AI2) released with the paper [OLMo: Accelerating the Science of Language Models](https://arxiv.org/abs/2402.00838) by Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, Hannaneh Hajishirzi.
+1. **[OLMo](https://huggingface.co/docs/transformers/master/model_doc/olmo)** (from Ai2) released with the paper [OLMo: Accelerating the Science of Language Models](https://arxiv.org/abs/2402.00838) by Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, Hannaneh Hajishirzi.
+1. **[OLMo2](https://huggingface.co/docs/transformers/master/model_doc/olmo2)** (from Ai2) released with the blog [OLMo 2: The best fully open language model to date](https://allenai.org/blog/olmo2) by the Ai2 OLMo team.
 1. **OpenELM** (from Apple) released with the paper [OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework](https://arxiv.org/abs/2404.14619) by Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari.
 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
```

src/configs.js

Lines changed: 1 addition & 0 deletions

```diff
@@ -104,6 +104,7 @@ function getNormalizedConfig(config) {
             break;
         case 'llama':
         case 'olmo':
+        case 'olmo2':
         case 'mobilellm':
         case 'granite':
         case 'cohere':
```
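The fall-through above means OLMo 2 reuses the LLaMA-family config normalization rather than getting its own branch. A minimal sketch of that pattern (`getNormalizedConfigSketch` and the subset of fields shown are illustrative, not the library's actual implementation):

```javascript
// Sketch only: 'olmo2' falls through to the shared branch because these
// architectures expose the same hyperparameter names in their configs.
function getNormalizedConfigSketch(config) {
  const normalized = {};
  switch (config.model_type) {
    case 'llama':
    case 'olmo':
    case 'olmo2': // the case added by this commit
    case 'mobilellm':
    case 'granite':
    case 'cohere':
      normalized.num_heads = config.num_attention_heads;
      normalized.num_layers = config.num_hidden_layers;
      normalized.hidden_size = config.hidden_size;
      break;
  }
  return normalized;
}

console.log(getNormalizedConfigSketch({
  model_type: 'olmo2',
  num_attention_heads: 16,
  num_hidden_layers: 16,
  hidden_size: 1024,
}));
```

Adding a shared `case` label is the entire config-side change; everything downstream consumes the normalized names.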

src/models.js

Lines changed: 9 additions & 0 deletions

```diff
@@ -4112,6 +4112,13 @@ export class OlmoModel extends OlmoPreTrainedModel { }
 export class OlmoForCausalLM extends OlmoPreTrainedModel { }
 //////////////////////////////////////////////////

+//////////////////////////////////////////////////
+// OLMo2 models
+export class Olmo2PreTrainedModel extends PreTrainedModel { }
+export class Olmo2Model extends Olmo2PreTrainedModel { }
+export class Olmo2ForCausalLM extends Olmo2PreTrainedModel { }
+//////////////////////////////////////////////////
+

 //////////////////////////////////////////////////
 // Granite models
@@ -6877,6 +6884,7 @@ const MODEL_MAPPING_NAMES_DECODER_ONLY = new Map([
   ['codegen', ['CodeGenModel', CodeGenModel]],
   ['llama', ['LlamaModel', LlamaModel]],
   ['olmo', ['OlmoModel', OlmoModel]],
+  ['olmo2', ['Olmo2Model', Olmo2Model]],
   ['mobilellm', ['MobileLLMModel', MobileLLMModel]],
   ['granite', ['GraniteModel', GraniteModel]],
   ['cohere', ['CohereModel', CohereModel]],
@@ -6968,6 +6976,7 @@ const MODEL_FOR_CAUSAL_LM_MAPPING_NAMES = new Map([
   ['codegen', ['CodeGenForCausalLM', CodeGenForCausalLM]],
   ['llama', ['LlamaForCausalLM', LlamaForCausalLM]],
   ['olmo', ['OlmoForCausalLM', OlmoForCausalLM]],
+  ['olmo2', ['Olmo2ForCausalLM', Olmo2ForCausalLM]],
   ['mobilellm', ['MobileLLMForCausalLM', MobileLLMForCausalLM]],
   ['granite', ['GraniteForCausalLM', GraniteForCausalLM]],
   ['cohere', ['CohereForCausalLM', CohereForCausalLM]],
```
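Beyond declaring the three classes, the two `Map` registrations above are what make the new architecture discoverable by its `config.model_type`. A stripped-down sketch of that lookup pattern, with a stub class standing in for the real transformers.js one:

```javascript
// Stub standing in for the real transformers.js Olmo2ForCausalLM class.
class Olmo2ForCausalLM {}

// Each entry maps a config.model_type to [class name, class constructor],
// mirroring the shape of MODEL_FOR_CAUSAL_LM_MAPPING_NAMES above.
const CAUSAL_LM_REGISTRY = new Map([
  ['olmo2', ['Olmo2ForCausalLM', Olmo2ForCausalLM]],
]);

// Resolve the registered class for a model_type and instantiate it.
function resolveCausalLM(modelType) {
  const entry = CAUSAL_LM_REGISTRY.get(modelType);
  if (!entry) throw new Error(`Unsupported model_type: ${modelType}`);
  const [, ModelClass] = entry;
  return new ModelClass();
}

console.log(resolveCausalLM('olmo2') instanceof Olmo2ForCausalLM); // true
```

This registry design is why the commit touches no dispatch logic: once `'olmo2'` appears in the maps, the generic `AutoModel`-style loaders pick it up.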

tests/tiny_random.test.js

Lines changed: 52 additions & 0 deletions

```diff
@@ -25,6 +25,7 @@ import {
   // Models
   LlamaForCausalLM,
   OlmoForCausalLM,
+  Olmo2ForCausalLM,
   GraniteForCausalLM,
   CohereModel,
   CohereForCausalLM,
@@ -1369,6 +1370,57 @@ describe("Tiny random models", () => {
     });
   });

+  describe("olmo2", () => {
+    describe("Olmo2ForCausalLM", () => {
+      const model_id = "hf-internal-testing/tiny-random-Olmo2ForCausalLM";
+      /** @type {Olmo2ForCausalLM} */
+      let model;
+      /** @type {GPT2Tokenizer} */
+      let tokenizer;
+      beforeAll(async () => {
+        model = await Olmo2ForCausalLM.from_pretrained(model_id, {
+          // TODO move to config
+          ...DEFAULT_MODEL_OPTIONS,
+        });
+        tokenizer = await GPT2Tokenizer.from_pretrained(model_id);
+        tokenizer.padding_side = "left";
+      }, MAX_MODEL_LOAD_TIME);
+
+      it(
+        "batch_size=1",
+        async () => {
+          const inputs = tokenizer("hello");
+          const outputs = await model.generate({
+            ...inputs,
+            max_length: 10,
+          });
+          expect(outputs.tolist()).toEqual([[15339n, 50957n, 43410n, 77030n, 91444n, 99516n, 80720n, 4608n, 90428n, 22806n]]);
+        },
+        MAX_TEST_EXECUTION_TIME,
+      );
+
+      it(
+        "batch_size>1",
+        async () => {
+          const inputs = tokenizer(["hello", "hello world"], { padding: true });
+          const outputs = await model.generate({
+            ...inputs,
+            max_length: 10,
+          });
+          expect(outputs.tolist()).toEqual([
+            [100277n, 15339n, 50957n, 43410n, 77030n, 91444n, 99516n, 80720n, 4608n, 90428n],
+            [15339n, 1917n, 12095n, 21350n, 61586n, 19306n, 39486n, 91527n, 59768n, 31934n],
+          ]);
+        },
+        MAX_TEST_EXECUTION_TIME,
+      );
+
+      afterAll(async () => {
+        await model?.dispose();
+      }, MAX_MODEL_DISPOSE_TIME);
+    });
+  });
+
   describe("granite", () => {
     describe("GraniteForCausalLM", () => {
       const model_id = "hf-internal-testing/tiny-random-GraniteForCausalLM";
```
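One detail worth noting in the added test is `tokenizer.padding_side = "left"`. Decoder-only models generate from the last position, so batched prompts of unequal length must be padded on the left to keep the real tokens flush against the generation point; this is why the batched expected output for the shorter prompt begins with the pad id `100277n`. A self-contained sketch of left padding (`padLeft` is a hypothetical helper, and the token ids are the ones appearing in the test):

```javascript
// Left-pad a token id sequence to a target length with a pad id, so the
// real tokens stay adjacent to the position where generation continues.
function padLeft(ids, length, padId) {
  return Array(Math.max(0, length - ids.length)).fill(padId).concat(ids);
}

const padId = 100277; // pad id seen at the start of the batched expected output
const batch = [[15339], [15339, 1917]]; // illustrative ids for "hello", "hello world"
const maxLen = Math.max(...batch.map((seq) => seq.length));
const padded = batch.map((seq) => padLeft(seq, maxLen, padId));
console.log(padded); // [ [ 100277, 15339 ], [ 15339, 1917 ] ]
```

With right padding, pad tokens would sit between the prompt and the first generated token, corrupting batched generation.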
