# EN‐Wiki
The interface has four main tabs: LLM, StableDiffusion, StableAudio, and Interface. Select the one you need and follow the instructions below.
## LLM

### Dataset
- Here you can create a new dataset or expand an existing one
- Datasets are saved in the datasets/llm folder (a sketch of one possible dataset file follows this list)
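The wiki does not specify the exact file format the interface expects; purely as an example, here is a minimal Python sketch that writes a small instruction-style dataset into datasets/llm (the file name and JSON structure are assumptions).

```python
# Minimal sketch of preparing an LLM dataset file by hand. The JSON structure
# below is an assumption; check the Dataset tab for the format it actually produces.
import json
from pathlib import Path

samples = [
    {"instruction": "Translate to French", "input": "Hello", "output": "Bonjour"},
    {"instruction": "Summarize", "input": "A long article...", "output": "A short summary."},
]

target = Path("datasets/llm/my_dataset.json")
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text(json.dumps(samples, ensure_ascii=False, indent=2), encoding="utf-8")
```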
### Finetune
- First, upload your models to the models/llm folder (see the download sketch after this list)
- Upload your dataset to the datasets/llm folder
- Select your model and dataset from the drop-down lists
- Select a finetune method
- Write a name for the model
- Set up the model hyper-parameters for finetuning
- Click the Submit button to receive the finetuned model
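If you prefer to fetch a base model from HuggingFace yourself rather than through the ModelDownloader tab, a minimal Python sketch is shown below. The repo id is only an example, not a recommendation from this project; the target folder is the models/llm folder mentioned above.

```python
# Minimal sketch: downloading a base LLM from HuggingFace into models/llm,
# where the Finetune tab looks for models. The repo id is an example only.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="openlm-research/open_llama_3b",  # example repo id, replace with your model
    local_dir="models/llm/open_llama_3b",     # folder read by the LLM tab
)
```

The same pattern works for models/sd and models/audio.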
### Evaluate
- First, upload your models to the finetuned-models/llm folder
- Upload your dataset to the datasets/llm folder
- Select your models and dataset from the drop-down lists
- Set up the model parameters for evaluation
- Click the Submit button to receive the model evaluation
### Quantize
- First, upload your models to the finetuned-models/llm folder
- Select a Model and Quantization Type
- Click the Submit button to receive the quantized model
### Generate
- Select your models from the drop-down list
- Set up the models according to the parameters you need
- Set up the model parameters for generation
- Click the Submit button to receive the generated text
## StableDiffusion

### Dataset
- Here you can create a new dataset or expand an existing one
- Datasets are saved in the datasets/sd folder
### Finetune
- First, upload your models to the models/sd folder (see the preparation sketch after this list)
- Upload your dataset to the datasets/sd folder
- Select your model and dataset from the drop-down lists
- Select a model type and finetune method
- Write a name for the model
- Set up the model hyper-parameters for finetuning
- Click the Submit button to receive the finetuned model
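As with the LLM tab, you can prepare the folders by hand. The sketch below downloads an example Stable Diffusion checkpoint into models/sd and copies training images into datasets/sd; the repo id and the flat image-folder layout are assumptions, not something this wiki specifies.

```python
# Minimal sketch, assuming a flat folder of images is an acceptable dataset layout.
import shutil
from pathlib import Path

from huggingface_hub import snapshot_download

# Example checkpoint; use whichever Stable Diffusion model you actually train on.
snapshot_download(
    repo_id="runwayml/stable-diffusion-v1-5",
    local_dir="models/sd/stable-diffusion-v1-5",
)

# Copy training images into datasets/sd (the source path is a placeholder).
dataset_dir = Path("datasets/sd/my_images")
dataset_dir.mkdir(parents=True, exist_ok=True)
for image in Path("~/training_images").expanduser().glob("*.png"):
    shutil.copy(image, dataset_dir / image.name)
```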
### Evaluate
- First, upload your models to the finetuned-models/sd folder
- Upload your dataset to the datasets/sd folder
- Select your models and dataset from the drop-down lists
- Select a model method and model type
- Enter your prompt
- Set up the model parameters for evaluation
- Click the Submit button to receive the model evaluation
### Conversion
- First, upload your models to the finetuned-models/sd folder
- Select a model type
- Set up the model parameters for conversion
- Click the Submit button to receive the converted model
### Quantize
- First, upload your models to the finetuned-models/sd folder
- Select a Model and Quantization Type
- Click the Submit button to receive the quantized model
### Generate
- First, upload your models to the finetuned-models/sd folder
- Select your models from the drop-down list
- Select a model method and model type
- Enter your prompt
- Set up the model parameters for generation
- Click the Submit button to receive the generated image
## StableAudio

### Dataset
- Here you can create a new dataset or expand an existing one
- Datasets are saved in the datasets/audio folder
### Finetune
- First, upload your models to the models/audio folder
- Upload your dataset to the datasets/audio folder
- Select your model and dataset from the drop-down lists
- Write a name for the model
- Click the Submit button to receive the finetuned model
### Generate
- First, upload your models to the finetuned-models/audio folder
- Select your models from the drop-down list
- Enter your prompt
- Set up the model parameters for generation
- Click the Submit button to receive the generated audio
## Interface

### Wiki
- Here you can view the online or offline wiki of the project

### ModelDownloader
- Here you can download LLM, StableDiffusion and StableAudio models. Just choose a model from the drop-down list and click the Submit button

### Settings
- Here you can change the application settings

### System
- Here you can see the readings of your computer's sensors by clicking the Submit button (an illustrative sketch follows)
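The wiki does not say how these readings are collected; purely as an illustration of the kind of indicators involved, here is a small psutil-based sketch (not the project's actual implementation).

```python
# Illustrative only: a few of the machine indicators a System tab might report.
import psutil

print(f"CPU load:      {psutil.cpu_percent(interval=1.0):.1f} %")
print(f"RAM used:      {psutil.virtual_memory().percent:.1f} %")
print(f"Disk used (/): {psutil.disk_usage('/').percent:.1f} %")
```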
## Additional information

- All finetunes are saved in the finetuned-models folder
- You can press the Clear button to reset your selection
- You can turn off the application using the Close terminal button
- You can open the finetuned-models, datasets, and outputs folders by clicking on the folder name button
- You can reload the interface drop-down lists by clicking on the Reload interface button
- LLM, StableDiffusion and StableAudio models can be taken from HuggingFace or from the ModelDownloader inside the interface
- LLM, StableDiffusion and StableAudio datasets can be taken from HuggingFace, or you can create your own datasets inside the interface