[Error, need help] Error running LongBench V1 following the documentation #125

@gaowayne

Description

I followed the V1 steps from the README; here is the error:

(LongBench) root@salab-hpedl380g11-03:~/wayne/kvcache/LongBench/LongBench# CUDA_VISIBLE_DEVICES=0 python pred.py --model chatglm3-6b-32k
README.md: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16.0k/16.0k [00:00<00:00, 109MB/s]
LongBench.py: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.98k/3.98k [00:00<00:00, 55.7MB/s]
0000.parquet: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.31M/1.31M [00:00<00:00, 18.4MB/s]
Generating test split: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 200/200 [00:00<00:00, 4384.39 examples/s]
tokenizer_config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 522/522 [00:00<00:00, 5.51MB/s]
tokenization_chatglm.py: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 11.3k/11.3k [00:00<00:00, 50.1MB/s]
A new version of the following files was downloaded from https://huggingface.co/THUDM/chatglm3-6b-32k:
- tokenization_chatglm.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
tokenizer.model: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.02M/1.02M [00:00<00:00, 15.6MB/s]
config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.34k/1.34k [00:00<00:00, 7.42MB/s]
configuration_chatglm.py: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 2.39k/2.39k [00:00<00:00, 37.1MB/s]
A new version of the following files was downloaded from https://huggingface.co/THUDM/chatglm3-6b-32k:
- configuration_chatglm.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
modeling_chatglm.py: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 55.8k/55.8k [00:00<00:00, 10.0MB/s]
quantization.py: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 14.7k/14.7k [00:00<00:00, 45.9MB/s]
A new version of the following files was downloaded from https://huggingface.co/THUDM/chatglm3-6b-32k:
- quantization.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
A new version of the following files was downloaded from https://huggingface.co/THUDM/chatglm3-6b-32k:
- modeling_chatglm.py
- quantization.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
pytorch_model.bin.index.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 20.4k/20.4k [00:00<00:00, 202MB/s]
model.safetensors.index.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████| 21.2k/21.2k [00:00<00:00, 20.3MB/s]
pytorch_model-00002-of-00007.bin: 100%|████████████████████████████████████████████████████████████████████████████████████████| 1.97G/1.97G [00:52<00:00, 37.6MB/s]
pytorch_model-00007-of-00007.bin: 100%|████████████████████████████████████████████████████████████████████████████████████████| 1.05G/1.05G [00:56<00:00, 18.8MB/s]
pytorch_model-00004-of-00007.bin: 100%|████████████████████████████████████████████████████████████████████████████████████████| 1.82G/1.82G [01:13<00:00, 24.6MB/s]
pytorch_model-00003-of-00007.bin: 100%|████████████████████████████████████████████████████████████████████████████████████████| 1.93G/1.93G [01:38<00:00, 19.7MB/s]
pytorch_model-00005-of-00007.bin: 100%|████████████████████████████████████████████████████████████████████████████████████████| 1.97G/1.97G [01:42<00:00, 19.2MB/s]
pytorch_model-00006-of-00007.bin: 100%|████████████████████████████████████████████████████████████████████████████████████████| 1.93G/1.93G [01:47<00:00, 18.0MB/s]
pytorch_model-00001-of-00007.bin: 100%|████████████████████████████████████████████████████████████████████████████████████████| 1.83G/1.83G [01:48<00:00, 16.8MB/s]
Fetching 7 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [01:49<00:00, 15.61s/it]
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:00<00:00, 17.65it/s]
  0%|                                                                                                                                       | 0/200 [00:00<?, ?it/s]
Process Process-1:
Traceback (most recent call last):
  File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/root/wayne/kvcache/LongBench/LongBench/pred.py", line 57, in get_pred
    tokenized_prompt = tokenizer(prompt, truncation=False, return_tensors="pt").input_ids[0]
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/wayne/kvcache/LongBench/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 2887, in __call__
    encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/wayne/kvcache/LongBench/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 2997, in _call_one
    return self.encode_plus(
           ^^^^^^^^^^^^^^^^^
  File "/root/wayne/kvcache/LongBench/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 3073, in encode_plus
    return self._encode_plus(
           ^^^^^^^^^^^^^^^^^^
  File "/root/wayne/kvcache/LongBench/lib/python3.12/site-packages/transformers/tokenization_utils.py", line 803, in _encode_plus
    return self.prepare_for_model(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/wayne/kvcache/LongBench/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 3569, in prepare_for_model
    encoded_inputs = self.pad(
                     ^^^^^^^^^
  File "/root/wayne/kvcache/LongBench/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 3371, in pad
    encoded_inputs = self._pad(
                     ^^^^^^^^^^
TypeError: ChatGLMTokenizer._pad() got an unexpected keyword argument 'padding_side'
[The remaining datasets download and the checkpoint shards load the same way; worker processes Process-2 through Process-7 each fail with the identical traceback, ending in:]
TypeError: ChatGLMTokenizer._pad() got an unexpected keyword argument 'padding_side'
0000.parquet: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.17M/5.17M [00:00<00:00, 53.7MB/s]
Generating test split: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 200/200 [00:00<00:00, 7017.88 examples/s]
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:00<00:00, 17.04it/s]
  0%|                                                                                                                                       | 0/200 [00:00<?, ?it/s]
Process Process-8:

KeyboardInterrupt
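For context on the failure mode: newer versions of `transformers` pass a `padding_side=` keyword down to the tokenizer's private `_pad()` method, while the remote-code `ChatGLMTokenizer` defines `_pad()` without that parameter, hence the `TypeError`. Below is a minimal sketch of the mismatch and one possible monkey-patch workaround. The `ChatGLMTokenizerLike` class is a hypothetical stand-in, not the real tokenizer; in practice the commonly reported fixes are pinning `transformers` to an older release or wrapping the loaded tokenizer's `_pad` as shown.

```python
# Stand-in (assumption) mimicking ChatGLMTokenizer: its private _pad()
# does not accept the padding_side kwarg that newer transformers passes.
class ChatGLMTokenizerLike:
    def _pad(self, encoded_inputs, max_length=None, padding_strategy=None,
             pad_to_multiple_of=None, return_attention_mask=None):
        return encoded_inputs

tok = ChatGLMTokenizerLike()

# Newer transformers effectively calls _pad(..., padding_side=None),
# which raises TypeError against this signature.
try:
    tok._pad({}, padding_side=None)
    failed = False
except TypeError:
    failed = True

# Possible workaround: wrap _pad to silently drop the unsupported kwarg
# before delegating to the original method.
_orig_pad = tok._pad
def _patched_pad(*args, **kwargs):
    kwargs.pop("padding_side", None)
    return _orig_pad(*args, **kwargs)
tok._pad = _patched_pad

result = tok._pad({"input_ids": [1, 2]}, padding_side=None)
```

The same wrapping could be applied to the real tokenizer object right after `AutoTokenizer.from_pretrained(..., trust_remote_code=True)` in `pred.py`; alternatively, downgrading `transformers` to a release predating the `padding_side` plumbing avoids the patch entirely.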
