
Facing issue in generate.py #8

@SarangShaikh201

Description

Running generate.py fails with the following device-mismatch error:

```
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
     93 TOKENIZER, MODEL = load_models(MODEL_NAME)
     94
---> 95 generate(TOKENIZER, MODEL, SENTENCES, LABEL, DEVICE)

<ipython-input> in generate(tokenizer, model, sentences, label, device)
     47
     48 next_token_id = choose_from_top_k_top_n(softmax_logits.to('cpu').numpy()) #top-k-top-n sampling
---> 49 cur_ids = torch.cat([cur_ids, torch.ones((1,1)).long().to(device) * next_token_id], dim = 1)
     50
     51 if next_token_id in tokenizer.encode('<|endoftext|>'):

RuntimeError: All input tensors must be on the same device. Received cpu and cuda:0
```
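
The concatenation on line 49 mixes devices: `cur_ids` is still on the CPU while the freshly built token tensor is moved to `device` (cuda:0). Below is a minimal, self-contained sketch of the likely fix, not the exact code from this repository; it assumes a Hugging Face GPT-2 style model and tokenizer, `choose_from_top_k_top_n` here is a simplified stand-in for the repository's sampler, and the prompt construction is a guess:

```python
import numpy as np
import torch


def choose_from_top_k_top_n(probs, k=25):
    # Simplified stand-in sampler: draw the next token from the k most likely ids.
    ids = np.argpartition(probs, -k)[-k:]
    top = probs[ids] / probs[ids].sum()
    return int(np.random.choice(ids, p=top))


def generate(tokenizer, model, sentences, label, device, max_len=100):
    model.eval()
    with torch.no_grad():
        for sentence in sentences:
            # Fix: build the prompt ids on the same device as the model, so every
            # tensor handed to torch.cat below lives on `device`.
            # (Prepending `label` to the prompt is an assumption about the script.)
            cur_ids = torch.tensor(tokenizer.encode(label + sentence)).unsqueeze(0).to(device)
            for _ in range(max_len):
                logits = model(cur_ids).logits
                softmax_logits = torch.softmax(logits[0, -1], dim=0)
                next_token_id = choose_from_top_k_top_n(softmax_logits.to('cpu').numpy())
                # Both operands are on `device` now, so this concatenation no longer raises.
                cur_ids = torch.cat(
                    [cur_ids, torch.ones((1, 1)).long().to(device) * next_token_id],
                    dim=1,
                )
                if next_token_id in tokenizer.encode('<|endoftext|>'):
                    break
            print(tokenizer.decode(cur_ids.squeeze().tolist()))
```

Equivalently, if `cur_ids` has to start on the CPU, calling `cur_ids = cur_ids.to(device)` once before the loop (or moving both operands to `device` at the `torch.cat` call) removes the same mismatch.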
