Commit 50ba949

linkcheck: update links

1 parent: 8161a1d

File tree: 3 files changed (+3 −3 lines)

fine-tuning.md
Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ Some ideas:
 - [Why You (Probably) Don't Need to Fine-tune an LLM](https://www.tidepool.so/2023/08/17/why-you-probably-dont-need-to-fine-tune-an-llm/) (instead, use few-shot prompting & retrieval-augmented generation)
 - [Fine-Tuning LLaMA-2: A Comprehensive Case Study for Tailoring Models to Unique Applications](https://www.anyscale.com/blog/fine-tuning-llama-2-a-comprehensive-case-study-for-tailoring-models-to-unique-applications) (fine-tuning LLaMA-2 for 3 real-world use cases)
 - [Private, local, open source LLMs](https://python.langchain.com/docs/guides/local_llms)
-- [Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM2)](https://github.com/hiyouga/LLaMA-Efficient-Tuning)
+- [Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM2)](https://github.com/hiyouga/LLaMA-Factory)
 - https://dstack.ai/examples/finetuning-llama-2
 - https://github.com/h2oai, etc.
 - [The History of Open-Source LLMs: Better Base Models (part 2)](https://cameronrwolfe.substack.com/p/the-history-of-open-source-llms-better) (LLaMA, MPT, Falcon, LLaMA-2)

references.bib
Lines changed: 1 addition & 1 deletion

@@ -413,7 +413,7 @@ @online{cursor-llama
 title={Why {GPT-3.5} is (mostly) cheaper than {LLaMA-2}},
 author={Aman},
 year=2023,
-url={https://www.cursor.so/blog/llama-inference}
+url={https://cursor.sh/blog/llama-inference}
 }
 @online{vector-indexing,
 title={Vector databases: Not all indexes are created equal},

references.md
Lines changed: 1 addition & 1 deletion

@@ -32,7 +32,7 @@ Couldn't decide which chapter(s) these links are related to. They're mostly abou
 - "How I Re-implemented PyTorch for WebGPU" (`webgpu-torch`: inference & autograd lib to run NNs in browser with negligible overhead) https://praeclarum.org/2023/05/19/webgpu-torch.html
 - "LLaMA from scratch (or how to implement a paper without crying)" (misc tips, scaled-down version of LLaMA for training) https://blog.briankitano.com/llama-from-scratch
 - "Swift Transformers: Run On-Device LLMs in Apple Devices" https://huggingface.co/blog/swift-coreml-llm
-- "Why GPT-3.5-turbo is (mostly) cheaper than LLaMA-2" https://www.cursor.so/blog/llama-inference#user-content-fn-gpt4-leak
+- "Why GPT-3.5-turbo is (mostly) cheaper than LLaMA-2" https://cursor.sh/blog/llama-inference#user-content-fn-gpt4-leak
 - http://marble.onl/posts/why_host_your_own_llm.html
 - https://betterprogramming.pub/you-dont-need-hosted-llms-do-you-1160b2520526
 - "Low-code framework for building custom LLMs, neural networks, and other AI models" https://github.com/ludwig-ai/ludwig
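All three hunks swap a stale URL for its current home (the commit message suggests a linkcheck pass surfaced them). As a minimal, hypothetical sketch of such a rewrite (the actual tooling behind this commit is not shown; the `MOVED` table and `update_link` helper below are illustrative only, seeded with the substitutions visible in the diff):

```python
# Hypothetical rewrite table mirroring the substitutions in this commit.
MOVED = {
    "https://github.com/hiyouga/LLaMA-Efficient-Tuning":
        "https://github.com/hiyouga/LLaMA-Factory",
    "https://www.cursor.so/blog/llama-inference":
        "https://cursor.sh/blog/llama-inference",
}

def update_link(url: str) -> str:
    """Rewrite a stale URL prefix to its current location, if known."""
    for old, new in MOVED.items():
        if url.startswith(old):
            # Preserve any trailing path or fragment (e.g. "#user-content-...").
            return new + url[len(old):]
    return url
```

Note that matching on a prefix rather than the full URL keeps fragments like `#user-content-fn-gpt4-leak` intact, which is exactly what the references.md hunk requires.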
