
How can I fine-tune the model for downstream tasks on my own server? #85

@nanu23333

Hi,

Thanks for your great work! It truly provides biologists like me with a new perspective.

I am new to transformers and Hugging Face, and I have just started learning by following the official tutorials. I am very interested in fine-tuning the models on my own server with GPU support.

Specifically, I want to use the model to predict whether a series of DNA sequences are enhancers or not. However, I have a few questions:

  1. How can I load the train and test datasets provided for downstream tasks on Hugging Face? Should I preprocess or transform them before fine-tuning the model?

  2. Is there a way to run the training and inference code entirely on my own server rather than on Hugging Face's platform? Could you share example code for that? (I have sketched my current understanding below.)
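
To make the questions concrete, here is a rough sketch of what I think the workflow might look like using the standard `datasets` and `transformers` APIs. The checkpoint name, dataset name, and the `"sequence"`/`"label"` column names are placeholders that I would replace with the ones from this repository; please correct anything that does not match the intended usage.

```python
# Rough sketch only: ORG/MODEL_NAME, ORG/ENHANCER_DATASET, and the column names
# "sequence" / "label" are placeholders, not the actual names from this repo.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

model_name = "ORG/MODEL_NAME"          # placeholder: checkpoint from this repo's README
dataset_name = "ORG/ENHANCER_DATASET"  # placeholder: downstream-task dataset on the Hub

# 1) Load the train/test splits published for the downstream task.
raw = load_dataset(dataset_name)  # DatasetDict with e.g. "train" and "test" splits

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

def tokenize(batch):
    # Assumes the DNA sequences are in a column named "sequence"; adjust as needed.
    return tokenizer(
        batch["sequence"], truncation=True, padding="max_length", max_length=512
    )

tokenized = raw.map(tokenize, batched=True)

# 2) Binary classification head: enhancer vs. not enhancer.
#    Assumes the dataset has an integer "label" column (0/1).
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2, trust_remote_code=True
)

args = TrainingArguments(
    output_dir="enhancer_finetune",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
    fp16=True,  # requires a CUDA GPU on the local server
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
)

trainer.train()

# 3) Inference on the held-out split, also run locally.
preds = trainer.predict(tokenized["test"])
print(preds.metrics)
```

My understanding is that `trainer.train()` and `trainer.predict()` both run entirely on the local GPU, and the Hub is only contacted once to download the checkpoint and dataset; is that correct?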

Thank you so much for your help. Any guidance or suggestions would be greatly appreciated!

Best regards,
