@@ -1,7 +1,7 @@
 #-------------------------------
 # Site Settings
 title: AI Innovation Team
-logo: # You can add own logo. For example '/images/logo.png'.
+logo: /images/toolkit logos/Black and White Labs/lab.png
 author: AI Innovation Team
 description: A collection of open-source generative AI tools made and maintained by a collective of AI researchers and engineers from Red Hat AI, MIT, IBM, UMass.
 
@@ -38,58 +38,58 @@ hero:
 portfolio:
   portfolio__title: LLM Hubs
   portfolio__gallery:
-    - image: /images/toolkit logos/logo-its-white_bg-v2.png
+    - image: /images/toolkit logos/Black and White Labs/lab-its.png
       title: its_hub
       link: https://github.com/Red-Hat-AI-Innovation-Team/its_hub
       description: Inference-time scaling for LLMs.
 
-    - image: /images/toolkit logos/sdg_hub.png
+    - image: /images/toolkit logos/Black and White Labs/lab-sdg.png
       title: sdg_hub
       link: https://github.com/Red-Hat-AI-Innovation-Team/sdg_hub
-      description: Synthetic data generation pipelines for post-training
+      description: Synthetic data generation pipelines
 
-    - image: /images/toolkit logos/training_hub.png
+    - image: /images/toolkit logos/Black and White Labs/lab-training-hub.png
       title: training_hub
       link: https://github.com/Red-Hat-AI-Innovation-Team/training_hub
       description: Post training algorithms for LLMs
 
 other_repos:
   title: LLM Tools
   gallery:
-    - image: /images/toolkit logos/async-logo.png
+    - image: /images/toolkit logos/Black and White Labs/lab-async-grpo.png
       title: async-grpo
      link: https://github.com/Red-Hat-AI-Innovation-Team/async-grpo
-      description: Asynchronous GRPO for scalable reinforcement learning
+      description: Asynchronous GRPO for scalable reinforcement learning.
 
-    - image: /images/toolkit logos/training.png
-      title: training
-      link: https://github.com/instructlab/training
-      description: Efficient messages-format SFT library for language models
-
-    - image: /images/toolkit logos/squat.jpg
-      title: SQuat
-      link: /squat/
-      description: KV cache quantization for scaling inference time
+    - image: /images/toolkit logos/Black and White Labs/lab-mini-trainer.png
+      title: mini_trainer
+      link: https://github.com/Red-Hat-AI-Innovation-Team/mini_trainer
+      description: Efficient training library for large language models up to 70B parameters on a single node.
 
-    - image: /images/toolkit logos/orthogonal-subspace-learning.png
+    - image: /images/toolkit logos/Black and White Labs/lab-osft.png
       title: orthogonal-subspace-learning
       link: https://github.com/Red-Hat-AI-Innovation-Team/orthogonal-subspace-learning
-      description: Adaptive SVD-based continual learning method for LLMs with negligible catastrophic forgetting.
+      description: Adaptive SVD-based continual learning method for LLMs.
 
-    - image: /images/toolkit logos/logo-its-white_bg-v2.png
+    - image: /images/toolkit logos/Black and White Labs/lab-probabilistic-its.png
       title: probabilistic-inference-scaling
       link: https://github.com/probabilistic-inference-scaling/probabilistic-inference-scaling
       description: Inference-time scaling with particle filtering.
 
-    - image: /images/toolkit logos/logo-its-white_bg-v2.png
+    - image: /images/toolkit logos/Black and White Labs/lab-reward-hub.png
       title: reward_hub
       link: https://github.com/Red-Hat-AI-Innovation-Team/reward_hub
       description: State-of-the-art reward models for preference data generation and acceptance criteria.
 
-    - image: /images/toolkit logos/training.png
-      title: mini_trainer
-      link: https://github.com/Red-Hat-AI-Innovation-Team/mini_trainer
-      description: Efficient training library for large language models up to 70B parameters on a single node.
+    - image: /images/toolkit logos/Black and White Labs/lab-squat.png
+      title: SQuat
+      link: /squat/
+      description: KV cache quantization for scaling inference time
+
+    - image: /images/toolkit logos/Black and White Labs/lab-training.png
+      title: training
+      link: https://github.com/instructlab/training
+      description: Efficient messages-format SFT library for language models
 
     # - image: /images/gallery-06.jpg
     #   title: Isaac Benhesed
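For context, gallery entries like the ones edited above are typically read back out of the site config by a Liquid template in the theme. The following is a minimal sketch only; the loop path (`site.other_repos.gallery`) matches the keys in this config, but the surrounding markup and class names are assumptions about the theme, not taken from it:

```
<!-- Hypothetical theme include: render each gallery entry as a linked card.
     Markup/classes are illustrative; only the data keys come from the config. -->
{% for item in site.other_repos.gallery %}
  <a class="gallery__item" href="{{ item.link }}">
    <img src="{{ item.image | relative_url }}" alt="{{ item.title }}">
    <h3>{{ item.title }}</h3>
    <p>{{ item.description }}</p>
  </a>
{% endfor %}
```

Note that the new image paths contain spaces, so a filter like `relative_url` (a standard Jekyll filter) leaves them unencoded; whether that matters depends on how the theme emits the URL.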