
Conversation

AlannaBurke

I split the docs work into 2 PRs; this is the first. Let's get this merged ASAP.

AlannaBurke and others added 19 commits October 8, 2025 16:58
…rehensive getting started with dependency details, and expanded concepts page with Monarch, Services, TorchStore, and RL workflows
Co-authored-by: Svetlana Karslioglu <[email protected]>
- Enhanced homepage with Monarch foundation emphasis, technology stack highlights, validated examples, and clear navigation paths
- Expanded getting started with detailed dependency explanations (Monarch, vLLM, TorchTitan, TorchStore, PyTorch Nightly)
- Converted installation and verification steps to numbered lists for better readability
- Removed FAQ references as FAQ page has been removed
- Fixed GPU/process terminology in code examples
The meta-cla bot added the CLA Signed label on Oct 17, 2025.
@AlannaBurke force-pushed the docs/pr1-homepage-getting-started-only branch from 5151f64 to f9b136a on October 17, 2025 at 01:51.
@svekars (Contributor) commented on Oct 17, 2025:

Need to clean up those references to usage.md:

https://github.com/meta-pytorch/forge/actions/runs/18579946858/job/52972683886?pr=448#step:11:103
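
A quick way to find the stale references before editing (assuming the docs live under `docs/`; the path is a guess, not from this thread):

```
grep -rn "usage.md" docs/
```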

```
conda activate forge
```

3. **Run Installation Script**
Contributor:

This may be OK for now, but possibly as soon as EOD today we may have different instructions. cc @joecummings

If we keep a script, what the script does will be different. I think we can ship this for now and update it once we're done.
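
For reference, a minimal sketch of the setup step being discussed, using the environment name from the quoted snippet; the Python version and the script path are assumptions, not taken from the repo:

```
# Hypothetical setup sequence; the install script's name and behavior may change
conda create -n forge python=3.10 -y
conda activate forge
./scripts/install.sh
```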

Fine-tune Llama 3 8B on your data. **Requires: 2+ GPUs**

1. **Download the Model**

Contributor:

@ebsmothers @daniellepintz - could you two please review these commands for SFT and ensure this is what we want?

Contributor:

@AlannaBurke let's use this command

```
hf download meta-llama/Meta-Llama-3.1-8B-Instruct --local-dir /tmp/Meta-Llama-3.1-8B-Instruct --exclude "original/consolidated.00.pth"
```
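
The `--exclude` flag skips the original Meta-format checkpoint (`original/consolidated.00.pth`), presumably because only the Hugging Face-format weights are needed; for an 8B model that avoids roughly 16 GB of extra download. A quick sanity check afterwards, using the path from the command above:

```
ls /tmp/Meta-Llama-3.1-8B-Instruct
```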

- Steps to reproduce
- Expected vs actual behavior

**Diagnostic command:**
Contributor:

This is a really good idea; let's keep it, and I think we should come up with a script for this in our issue templates.

cc @joecummings @daniellepintz? Not sure who to tag here.
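
A minimal sketch of what such a diagnostic script could collect, assuming the stack named in this PR (PyTorch, vLLM, Monarch, TorchStore); none of these commands come from the repo:

```
# Hypothetical diagnostics for an issue template
python -c "import torch; print('torch', torch.__version__, 'cuda available:', torch.cuda.is_available())"
nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv
pip list 2>/dev/null | grep -iE 'torch|vllm|monarch|torchstore|forge'
```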

@allenwang28 (Contributor) left a comment:

Thanks @AlannaBurke!

