
Commit 3740546

gh: update links in GitHub templates (#19592)
1 parent: 0b88204

3 files changed: +9 −12 lines


.github/ISSUE_TEMPLATE/2_refactor.yaml

Lines changed: 3 additions & 4 deletions
```diff
@@ -34,7 +34,6 @@ body:
 - [**Metrics**](https://github.com/Lightning-AI/metrics):
   Machine learning metrics for distributed, scalable PyTorch applications.
   enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
-- [**Flash**](https://github.com/Lightning-AI/lightning-flash):
-  The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
-- [**Bolts**](https://github.com/Lightning-AI/lightning-bolts):
-  Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
+- [**GPT**](https://github.com/Lightning-AI/lit-GPT):
+  Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT.
+  Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
```
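For context, the bullet list edited above lives inside the Markdown `value:` of a `type: markdown` block under each form's top-level `body:` key (the `body:` visible in the hunk header). A minimal sketch of that surrounding structure, assuming illustrative `name` and `description` values and simplified indentation; only the list items come from this commit:

```yaml
# Sketch of a GitHub issue form such as .github/ISSUE_TEMPLATE/2_refactor.yaml.
# `name`, `description`, and the intro sentence are placeholders, not taken
# from the repository; the list items mirror the post-commit state above.
name: Refactor
description: Propose a refactor or code cleanup
body:
  - type: markdown
    attributes:
      value: |
        Related Lightning projects:

        - [**Metrics**](https://github.com/Lightning-AI/metrics):
          Machine learning metrics for distributed, scalable PyTorch applications.
        - [**GPT**](https://github.com/Lightning-AI/lit-GPT):
          Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT.
          Supports flash attention, 4-bit and 8-bit quantization, LoRA and
          LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
```

The same Flash/Bolts → GPT replacement is applied verbatim in the two templates below; only the hunk position differs.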

.github/ISSUE_TEMPLATE/3_feature_request.yaml

Lines changed: 3 additions & 4 deletions
```diff
@@ -40,7 +40,6 @@ body:
 - [**Metrics**](https://github.com/Lightning-AI/metrics):
   Machine learning metrics for distributed, scalable PyTorch applications.
   enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
-- [**Flash**](https://github.com/Lightning-AI/lightning-flash):
-  The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
-- [**Bolts**](https://github.com/Lightning-AI/lightning-bolts):
-  Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
+- [**GPT**](https://github.com/Lightning-AI/lit-GPT):
+  Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT.
+  Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
```

.github/ISSUE_TEMPLATE/4_documentation.yaml

Lines changed: 3 additions & 4 deletions
```diff
@@ -23,7 +23,6 @@ body:
 - [**Metrics**](https://github.com/Lightning-AI/metrics):
   Machine learning metrics for distributed, scalable PyTorch applications.
   enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
-- [**Flash**](https://github.com/Lightning-AI/lightning-flash):
-  The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
-- [**Bolts**](https://github.com/Lightning-AI/lightning-bolts):
-  Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
+- [**GPT**](https://github.com/Lightning-AI/lit-GPT):
+  Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT.
+  Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
```
