Add LVFace library #1685

Open · wants to merge 7 commits into main

Changes from 2 commits
18 changes: 18 additions & 0 deletions packages/tasks/src/model-libraries-snippets.ts
@@ -1703,6 +1703,24 @@ export const vfimamba = (model: ModelData): string[] => [
model = Model.from_pretrained("${model.id}")`,
];

export const lvface = (model: ModelData): string[] => [
`## Initialize the inferencer from inference_onnx.py
Contributor:

can you please add imports 🥹 feel free to remove comments to make it more minimal

Member:

yes^ - let's make the snippets complete and minimal.

Author:

Excuse me, would it be necessary for me to delete some of the comments to reduce the number of lines they take up?

Contributor:

I think in general we keep them minimal, so the fewer the better.

Author:

> I think in general we keep them minimal, so lesser the better

Thank you for the reminder. I have already compressed it to just one line.

Member:

Apologies for the confusion here @GitHup2016yjh - what we meant by a minimal snippet was to have something that enables people to test your model in, say, a Google Colab without looking at your GitHub repo or documentation for more details.

You can see a couple of examples of this in the rest of the snippets in the same file.

An ideal snippet has three main parts:

  1. Setting up the environment in the comments (# pip install x y z)
  2. Load the model weights from Hugging Face (the inferencer in your case)
  3. Run inference on a sample input

Let me know if this is not clear - happy to elaborate more.

Thanks again for your contribution!
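The three-part structure described above can be sketched as follows. This is an illustrative skeleton only: the random 512-dimensional vectors stand in for the embeddings an LVFace inferencer would return, since the real model load is library-specific.

```python
# 1. Environment setup, kept in comments:
# pip install numpy

import numpy as np

# 2. "Load the model weights" step: random vectors stand in here for the
#    feature embeddings a real LVFace inferencer would produce (assumption).
rng = np.random.default_rng(seed=0)
feat1 = rng.normal(size=512)
feat2 = rng.normal(size=512)

# 3. Run "inference" on a sample input: cosine similarity of the two features.
similarity = float(np.dot(feat1, feat2) / (np.linalg.norm(feat1) * np.linalg.norm(feat2)))
print(f"Similarity score: {similarity:.6f}")
```

The point of the pattern is that a reader can paste the block into a fresh Colab cell and get output without consulting external docs.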

Author:

> Apologies for the confusion here @GitHup2016yjh - what we meant by minimal snippet was to have something that enables people to test your model [...]

Thank you for the reminder. I have revised this part of the content. Is it correct now?

inferencer = LVFaceONNXInferencer(
    model_path="${model.id}",  # Path to your ONNX model
    use_gpu=True,  # Set to False for CPU-only inference
)

## Extract a feature vector from a local image
feat1 = inferencer.infer_from_image("path/to/image1.jpg")

## Extract a feature vector from a URL
feat2 = inferencer.infer_from_url("https://example.com/image1.jpg")

## Calculate cosine similarity between the two features
similarity = inferencer.calculate_similarity(feat1, feat2)
print(f"Similarity score: {similarity:.6f}")  # Output example: 0.872345`,
];
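For reference, the cosine similarity that the snippet's final lines report can be written out directly. The helper below is a sketch of the underlying math, not LVFace's actual `calculate_similarity` implementation:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Identical vectors score ~1.0; orthogonal vectors score ~0.0.
v = np.array([0.6, 0.8])
w = np.array([-0.8, 0.6])
print(cosine_similarity(v, v))  # close to 1.0
print(cosine_similarity(v, w))  # close to 0.0
```

Because the score depends only on the angle between embeddings, it is insensitive to their magnitude, which is why face-recognition pipelines commonly compare L2-normalized features this way.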

export const voicecraft = (model: ModelData): string[] => [
`from voicecraft import VoiceCraft

7 changes: 7 additions & 0 deletions packages/tasks/src/model-libraries.ts
@@ -1156,6 +1156,13 @@ export const MODEL_LIBRARIES_UI_ELEMENTS = {
countDownloads: `path_extension:"pkl"`,
snippets: snippets.vfimamba,
},
"LVFace": {
prettyLabel: "LVFace",
repoName: "LVFace",
repoUrl: "https://github.com/bytedance/LVFace",
countDownloads: `path_extension:"pt" OR path_extension:"onnx"`,
snippets: snippets.lvface,
},
voicecraft: {
prettyLabel: "VoiceCraft",
repoName: "VoiceCraft",