----
-- type: model
-  name: ACT-1
-  organization: Adept
-  description: ACT-1 (ACtion Transformer) is a large-scale transformer model designed
-    and trained specifically for taking actions on computers (use software tools
-    APIs and websites) in response to the user's natural language commands.
+- access: closed
+  analysis: ''
   created_date:
     explanation: The date the model was announced in the [Adept blog post](https://www.adept.ai/blog/act-1).
     value: 2022-09-14
-  url: https://www.adept.ai/blog/act-1
-  model_card: none
+  dependencies: []
+  description: ACT-1 (ACtion Transformer) is a large-scale transformer model designed
+    and trained specifically for taking actions on computers (use software tools APIs
+    and websites) in response to the user's natural language commands.
+  feedback: ''
+  intended_uses: ''
+  license: unknown
   modality: text; text
-  analysis: ''
+  model_card: none
+  monitoring: ''
+  name: ACT-1
+  nationality: USA
+  organization: Adept
+  prohibited_uses: ''
+  quality_control: ''
   size: ''
-  dependencies: []
   training_emissions: unknown
-  training_time: unknown
   training_hardware: unknown
-  quality_control: ''
-  access: closed
-  license: unknown
+  training_time: unknown
+  type: model
+  url: https://www.adept.ai/blog/act-1
+- access: open
+  analysis: Evaluated in comparison to LLaMA 2 and MPT Instruct, and outperforms both
+    on standard benchmarks.
+  created_date: 2023-09-07
+  dependencies: []
+  description: Persimmon is the most capable open-source, fully permissive model with
+    fewer than 10 billion parameters, as of its release date.
+  feedback: ''
   intended_uses: ''
-  prohibited_uses: ''
+  license: Apache 2.0
+  modality: text; text
+  model_card: ''
   monitoring: ''
-  feedback: ''
-- type: model
   name: Persimmon
+  nationality: USA
   organization: Adept
-  description: Persimmon is the most capable open-source, fully permissive model
-    with fewer than 10 billion parameters, as of its release date.
-  created_date: 2023-09-07
-  url: https://www.adept.ai/blog/persimmon-8b
-  model_card: ''
-  modality: text; text
-  analysis: Evaluated in comparison to LLaMA 2 and MPT Instruct, and outperforms
-    both on standard benchmarks.
+  prohibited_uses: ''
+  quality_control: ''
   size: 8B parameters (dense)
-  dependencies: []
   training_emissions: ''
-  training_time: ''
   training_hardware: ''
-  quality_control: ''
-  access: open
-  license: Apache 2.0
-  intended_uses: ''
-  prohibited_uses: ''
-  monitoring: ''
-  feedback: ''
-- type: model
-  name: Fuyu
-  organization: Adept
+  training_time: ''
+  type: model
+  url: https://www.adept.ai/blog/persimmon-8b
+- access: open
+  analysis: Evaluated on standard image understanding benchmarks.
+  created_date: 2023-10-17
+  dependencies: []
   description: Fuyu is a small version of the multimodal model that powers Adept's
     core product.
-  created_date: 2023-10-17
-  url: https://www.adept.ai/blog/fuyu-8b
-  model_card: https://huggingface.co/adept/fuyu-8b
+  feedback: https://huggingface.co/adept/fuyu-8b/discussions
+  intended_uses: The model is intended for research purposes only.
+  license: CC-BY-NC-4.0
   modality: image, text; text
-  analysis: Evaluated on standard image understanding benchmarks.
+  model_card: https://huggingface.co/adept/fuyu-8b
+  monitoring: ''
+  name: Fuyu
+  nationality: USA
+  organization: Adept
+  prohibited_uses: The model was not trained to be factual or true representations
+    of people or events, and therefore using the model to generate such content is
+    out-of-scope for the abilities of this model.
+  quality_control: none
   size: 8B parameters (dense)
-  dependencies: []
   training_emissions: unknown
-  training_time: unknown
   training_hardware: unknown
-  quality_control: none
-  access: open
-  license: CC-BY-NC-4.0
-  intended_uses: The model is intended for research purposes only.
-  prohibited_uses: The model was not trained to be factual or true representations
-    of people or events, and therefore using the model to generate such content
-    is out-of-scope for the abilities of this model.
-  monitoring: ''
-  feedback: https://huggingface.co/adept/fuyu-8b/discussions
-- type: model
-  name: Fuyu Heavy
-  organization: Adept
+  training_time: unknown
+  type: model
+  url: https://www.adept.ai/blog/fuyu-8b
+- access: closed
+  analysis: Evaluated on the MMLU, GSM8K, MATH, and HumanEval benchmarks. According
+    to these benchmarks, Fuyu-Heavy is, as of release, the strongest multimodal model
+    trained outside of Google or OpenAI.
+  created_date: 2024-01-24
+  dependencies: []
   description: Fuyu Heavy is a new multimodal model designed specifically for digital
     agents.
-  created_date: 2024-01-24
-  url: https://www.adept.ai/blog/adept-fuyu-heavy
-  model_card: none
+  feedback: none
+  intended_uses: unknown
+  license: unknown
   modality: image, text; text
-  analysis: Evaluated on the MMLU, GSM8K, MATH, and HumanEval benchmarks. According
-    to these benchmarks, Fuyu-Heavy is, as of release, the strongest multimodal
-    model trained outside of Google or OpenAI.
+  model_card: none
+  monitoring: ''
+  name: Fuyu Heavy
+  nationality: USA
+  organization: Adept
+  prohibited_uses: none
+  quality_control: none
   size:
     explanation: The size of the model is 10-20 times smaller than GPT-4V and Gemini
       Ultra, as per announcement.
     value: unknown
-  dependencies: []
   training_emissions: unknown
-  training_time: unknown
   training_hardware: unknown
-  quality_control: none
-  access: closed
-  license: unknown
-  intended_uses: unknown
-  prohibited_uses: none
-  monitoring: ''
-  feedback: none
+  training_time: unknown
+  type: model
+  url: https://www.adept.ai/blog/adept-fuyu-heavy
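The change above reorders each model entry's keys alphabetically (and adds a `nationality` field), which keeps future diffs against the registry small and predictable. A minimal sketch of checking that invariant, using only the Python standard library and modeling one entry as a dict (the `keys_alphabetized` helper is illustrative, not part of the repository):

```python
# One entry from the new revision, written as a Python dict; dict insertion
# order mirrors the YAML key order, so we can test the ordering invariant.
persimmon = {
    "access": "open",
    "created_date": "2023-09-07",
    "license": "Apache 2.0",
    "name": "Persimmon",
    "nationality": "USA",
    "organization": "Adept",
    "size": "8B parameters (dense)",
    "type": "model",
}

def keys_alphabetized(entry: dict) -> bool:
    """Return True if the entry's keys appear in sorted (alphabetical) order."""
    keys = list(entry)
    return keys == sorted(keys)

print(keys_alphabetized(persimmon))  # True for entries in the new layout
```

Entries in the old layout (e.g. starting with `type`, then `name`, then `organization`) would fail this check, which is what the reordering in this commit fixes.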