
Commit 1d92a90

Explain why there are more tokens than reviews (#476)

* Explain why there are more tokens than reviews
* Update chapters/en/chapter5/3.mdx

Co-authored-by: lewtun <[email protected]>
1 parent 32bfdff commit 1d92a90

chapters/en/chapter5/3.mdx

Lines changed: 1 addition & 1 deletion
@@ -387,7 +387,7 @@ ArrowInvalid: Column 1 named condition expected length 1463 but got length 1000

Oh no! That didn't work! Why not? Looking at the error message will give us a clue: there is a mismatch in the lengths of one of the columns, one being of length 1,463 and the other of length 1,000. If you've looked at the `Dataset.map()` [documentation](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map), you may recall that it's the number of samples passed to the function that we are mapping; here those 1,000 examples gave 1,463 new features, resulting in a shape error.

- The problem is that we're trying to mix two different datasets of different sizes: the `drug_dataset` columns will have a certain number of examples (the 1,000 in our error), but the `tokenized_dataset` we are building will have more (the 1,463 in the error message). That doesn't work for a `Dataset`, so we need to either remove the columns from the old dataset or make them the same size as they are in the new dataset. We can do the former with the `remove_columns` argument:
+ The problem is that we're trying to mix two different datasets of different sizes: the `drug_dataset` columns will have a certain number of examples (the 1,000 in our error), but the `tokenized_dataset` we are building will have more (the 1,463 in the error message; it is more than 1,000 because we are tokenizing long reviews into more than one example by using `return_overflowing_tokens=True`). That doesn't work for a `Dataset`, so we need to either remove the columns from the old dataset or make them the same size as they are in the new dataset. We can do the former with the `remove_columns` argument:

```py
tokenized_dataset = drug_dataset.map(
```
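The hunk cuts off at the start of that `map()` call, so here is a rough sketch of the pattern the changed paragraph refers to, not the exact file contents. The `tokenize_and_split` helper, the `bert-base-cased` checkpoint, and the `review` column name are assumptions based on the surrounding course section; `drug_dataset` is the `DatasetDict` prepared earlier in that section.

```py
from transformers import AutoTokenizer

# Assumption: the same checkpoint used earlier in this section of the course.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")


def tokenize_and_split(examples):
    # return_overflowing_tokens=True chops each long review into several
    # 128-token chunks, so 1,000 reviews can yield 1,463 tokenized examples.
    return tokenizer(
        examples["review"],
        truncation=True,
        max_length=128,
        return_overflowing_tokens=True,
    )


# `drug_dataset` is assumed to be the DatasetDict built earlier in the section.
# Dropping the original columns sidesteps the 1,000-vs-1,463 length mismatch.
tokenized_dataset = drug_dataset.map(
    tokenize_and_split, batched=True, remove_columns=drug_dataset["train"].column_names
)
```

With the old columns removed, the rows produced by overflowing tokens no longer have to line up one-to-one with the original reviews, so `Dataset.map()` can build the new dataset without raising the `ArrowInvalid` length error shown in the hunk header.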
